{"id": "H-14", "title": "", "abstract": "", "keyphrases": ["popular destin", "web search interact", "improv queri", "retriev perform", "relat queri", "inform-seek experi", "queri trail", "session trail", "lookup-base approach", "log-base evalu", "user studi", "search destin", "enhanc web search"], "prmu": [], "lvl-1": "Studying the Use of Popular Destinations to Enhance Web Search Interaction Ryen W. White Microsoft Research One Microsoft Way Redmond, WA 98052 ryenw@microsoft.com Mikhail Bilenko Microsoft Research One Microsoft Way Redmond, WA 98052 mbilenko@microsoft.com Silviu Cucerzan Microsoft Research One Microsoft Way Redmond, WA 98052 silviu@microsoft.com ABSTRACT We present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs.\nThese popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic.\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority.\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search.\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - search process.\nGeneral Terms Human Factors, Experimentation.\n1.\nINTRODUCTION The problem of improving queries sent to Information Retrieval (IR) systems has been studied extensively in IR research [4][11].\nAlternative query formulations, known as query suggestions, can be offered to users following an initial query, allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance.\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10].\nLeveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [8].\nIn recent years, applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work.\nHowever, interaction-based approaches to query suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may occur beyond search engine interactions.\nIn cases where directed searching is only a fraction of users'' information-seeking behavior, the utility of other users'' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior.\nAt the same time, user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks.\nThus, we propose exploiting a combination of past searching and browsing user behavior to enhance users'' Web search interactions.\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions.\nIn 
previous work, such data have been used to improve search result ranking by Agichtein et al. [1].\nHowever, this approach only considers page visitation statistics independently of each other, not taking into account the pages'' relative positions on post-query browsing paths.\nRadlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users'' interactions beyond the search result page.\nIn this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results.\nThe destinations may not be among the topranked results, may not contain the queried terms, or may not even be indexed by the search engine.\nInstead, they are pages at which other users end up frequently after submitting same or similar queries and then browsing away from initially clicked search results.\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis.\nIn prior work, O``Day and Jeffries [12] identified teleportation as an information-seeking strategy employed by users jumping to their previously-visited information targets, while Anderson et al. [2] applied similar principles to support the rapid navigation of Web sites on mobile devices.\nIn [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users.\nHowever, we are not aware of such principles being applied to Web search.\nResearch in the area of recommender systems has also addressed similar issues, but in areas such as question-answering [9] and relatively small online communities [16].\nPerhaps the nearest instantiation of teleportation is search engines'' offering of several within-domain shortcuts below the title of a search result.\nWhile these may be based on user behavior and possibly site structure, the user saves at most one click from this feature.\nIn contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.\nThe conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages.\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search, and seek answers to questions on: (i) user preference and search effectiveness for known-item and exploratory search tasks, and (ii) the preferred distance between query and destination used to identify popular destinations from past behavior logs.\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience, while providing query refinement suggestions is most desirable for known-item tasks.\nThe remainder of the paper is structured as follows.\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs, and their use in identifying top destinations for new queries.\nSection 3 describes the design of the user study, while Sections 4 and 5 present the study findings and their discussion, respectively.\nWe conclude in Section 6 with a 
summary.\n2.\nSEARCH TRAILS AND DESTINATIONS We used Web activity logs containing searching and browsing activity collected with permission from hundreds of thousands of users over a five-month period between December 2005 and April 2006.\nEach log entry included an anonymous user identifier, a timestamp, a unique browser window identifier, and the URL of a visited Web page.\nThis information was sufficient to reconstruct temporally ordered sequences of viewed pages that we refer to as trails.\nIn this section, we summarize the extraction of trails, their features, and destinations (trail end-points).\nIn-depth description and analysis of trail extraction are presented in [20].\n2.1 Trail Extraction For each user, interaction logs were grouped based on browser identifier information.\nWithin each browser instance, participant navigation was summarized as a path known as a browser trail, from the first to the last Web page visited in that browser.\nLocated within some of these trails were search trails that originated with a query submission to a commercial search engine such as Google, Yahoo!, Windows Live Search, and Ask.\nIt is these search trails that we use to identify popular destinations.\nAfter originating with a query submission to a search engine, trails proceed until a point of termination where it is assumed that the user has completed their information-seeking activity.\nTrails must contain pages that are either: search result pages, search engine homepages, or pages connected to a search result page via a sequence of clicked hyperlinks.\nExtracting search trails using this methodology also goes some way toward handling multi-tasking, where users run multiple searches concurrently.\nSince users may open a new browser window (or tab) for each task [18], each task has its own browser trail, and a corresponding distinct search trail.\nTo reduce the amount of noise from pages unrelated to the active search task that may pollute our data, search trails are terminated when one of the following events occurs: (1) a user returns to their homepage, checks e-mail, logs in to an online service (e.g., MySpace or del.icio.us), types a URL or visits a bookmarked page; (2) a page is viewed for more than 30 minutes with no activity; (3) the user closes the active browser window.\nIf a page (at step i) meets any of these criteria, the trail is assumed to terminate on the previous page (i.e., step i - 1).\nThere are two types of search trails we consider: session trails and query trails.\nSession trails transcend multiple queries and terminate only when one of the three termination criteria above is satisfied.\nQuery trails use the same termination criteria as session trails, but also terminate upon submission of a new query to a search engine.\nApproximately 14 million query trails and 4 million session trails were extracted from the logs.
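The trail segmentation just described can be illustrated with a minimal Python sketch. It assumes a simplified log-record format (dictionaries carrying url, domain, query, timestamp, and an event flag standing in for the homepage/e-mail/login/typed-URL/bookmark signals); the field names, the engine list, and the event detection are illustrative assumptions, not the logging infrastructure actually used in the study.

```python
from datetime import timedelta

SEARCH_ENGINE_DOMAINS = {"google.com", "search.yahoo.com", "search.live.com", "ask.com"}
INACTIVITY_TIMEOUT = timedelta(minutes=30)          # termination criterion (2)
TERMINATING_EVENTS = {"homepage", "email", "login", "typed_url", "bookmark"}  # criterion (1)


def is_query_submission(view):
    # Hypothetical predicate: a result page on a known engine carrying a query string.
    return view["domain"] in SEARCH_ENGINE_DOMAINS and bool(view.get("query"))


def extract_query_trails(views):
    """Split one browser instance's time-ordered page views into query trails."""
    trails, current, prev = [], [], None
    for view in views:
        terminate = (
            view.get("event") in TERMINATING_EVENTS
            or (prev is not None
                and view["timestamp"] - prev["timestamp"] > INACTIVITY_TIMEOUT)
        )
        if current and (terminate or is_query_submission(view)):
            trails.append(current)   # the trail ends on the page before this one
            current = []
        if is_query_submission(view):
            current = [view]         # a new query submission starts a new query trail
        elif current and not terminate:
            current.append(view)     # pages reached via clicked hyperlinks from results
        prev = view
    if current:
        trails.append(current)       # criterion (3): the browser window was closed
    return trails
```

Session trails would be produced by the same procedure with the new-query condition removed from the termination test, so that a single trail spans all queries issued until one of the three termination criteria fires.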
We now describe some trail features.\n2.2 Trail and Destination Analysis Table 1 presents summary statistics for the query and session trails.\nDifferences in user interaction between the last domain on the trail (Domain n) and all domains visited earlier (Domains 1 to (n - 1)) are particularly important, because they highlight the wealth of user behavior data not captured by logs of search engine interactions.\nStatistics are averages for all trails with two or more steps (i.e., those trails where at least one search result was clicked).\nTable 1.\nSummary statistics (mean averages) for search trails.\nMeasure: Query trails / Session trails\nNumber of unique domains: 2.0 / 4.3\nTotal page views, all domains: 4.8 / 16.2\nTotal page views, Domains 1 to (n - 1): 1.4 / 10.1\nTotal page views, Domain n (destination): 3.4 / 6.2\nTotal time spent (secs), all domains: 172.6 / 621.8\nTotal time spent (secs), Domains 1 to (n - 1): 70.4 / 397.6\nTotal time spent (secs), Domain n (destination): 102.3 / 224.1\nThe statistics suggest that users generally browse far from the search results page (i.e., around 5 steps), and visit a range of domains during the course of their search.\nOn average, users visit 2 unique (non search-engine) domains per query trail, and just over 4 unique domains per session trail.\nThis suggests that users often do not find all the information they seek on the first domain they visit.\nFor query trails, users also visit more pages, and spend significantly longer, on the last domain in the trail compared to all previous domains combined.1 These distinctions of the last domains in the trails may indicate user interest, page utility, or page relevance.2\n1 Independent measures t-test: t(~60M) = 3.89, p < .001\n2 The topical relevance of the destinations was tested for a subset of around ten thousand queries for which we had human judgments.\nThe average rating of most of the destinations lay between good and excellent.\nVisual inspection of those that did not lie in this range revealed that many were either relevant but had no judgments, or were related but had indirect query association (e.g., petfooddirect.com for query [dogs]).\n2.3 Destination Prediction For frequent queries, most popular destinations identified from Web activity logs could be simply stored for future lookup at search time.\nHowever, we have found that over the five-month period covered by our dataset, 56.9% of queries are unique, and 97% of queries occur 10 or fewer times, accounting for 19.8% and 66.3% of all searches respectively (these numbers are comparable to those reported in previous studies of search engine query logs [15,17]).\nTherefore, a lookup-based approach would prevent us from reliably suggesting destinations for a large fraction of searches.\nTo overcome this problem, we utilize a simple term-based prediction model.\nAs discussed above, we extract two types of destinations: query destinations and session destinations.\nFor both destination types, we obtain a corpus of query-destination pairs and use it to construct a term-vector representation of destinations that is analogous to the classic tf.idf document representation in traditional IR [14].\nThen, given a new query q consisting of k terms t1...tk, we identify the highest-scoring destinations using the following similarity function:\nscore(q, d) = \sum_{i=1}^{k} w_q(t_i) \cdot w_d(t_i)\nwhere the query and destination term weights, w_q(t_i) and w_d(t_i), are computed using standard tf.idf weighting and query- and user-session-normalized smoothed tf.idf weighting, respectively.\nWhile exploring alternative algorithms for the destination prediction task remains an interesting challenge for future work, results of the user study described in subsequent sections demonstrate that this simple approach provides robust, effective results.
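To make the destination scoring concrete, the following Python sketch builds tf.idf-style term vectors for destinations from query-destination pairs and ranks candidate destinations for a new query by the sum of matching query-weight times destination-weight products. It is a minimal illustration under stated assumptions: the exact tf.idf variant and the query- and session-level normalizations are not fully specified above, so logarithmic tf.idf and raw query term counts stand in for them, and the example pairs in the comment are hypothetical.

```python
import math
from collections import Counter, defaultdict


def build_destination_vectors(query_destination_pairs):
    """Build tf.idf-style term vectors for destinations from (query, destination) pairs."""
    term_counts = defaultdict(Counter)   # destination -> query-term frequencies
    doc_freq = Counter()                 # term -> number of destinations it occurs with
    for query, destination in query_destination_pairs:
        for term in query.lower().split():
            if term_counts[destination][term] == 0:
                doc_freq[term] += 1
            term_counts[destination][term] += 1
    n = len(term_counts)
    return {
        destination: {t: (1 + math.log(tf)) * math.log(n / doc_freq[t])
                      for t, tf in counts.items()}
        for destination, counts in term_counts.items()
    }


def score_destinations(query, vectors, top_k=6):
    """Rank destinations for query q = t1...tk by the sum over terms of w_q(t) * w_d(t)."""
    q_weights = Counter(query.lower().split())   # simple stand-in for w_q
    scores = {
        destination: sum(q_weights[t] * w for t, w in vec.items() if t in q_weights)
        for destination, vec in vectors.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


# Hypothetical usage:
# vectors = build_destination_vectors([("hubble telescope", "hubblesite.org"),
#                                      ("hubble telescope pictures", "nasa.gov")])
# print(score_destinations("hubble space telescope", vectors))
```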
3.\nSTUDY To examine the usefulness of destinations, we conducted a user study investigating the perceptions and performance of 36 subjects on four Web search systems, two with destination suggestions.\n3.1 Systems Four systems were used in this study: a baseline Web search system with no explicit support for query refinement (Baseline), a search system with a query suggestion method that recommends additional queries (QuerySuggestion), and two systems that augment baseline Web search with destination suggestions using either end-points of query trails (QueryDestination), or end-points of session trails (SessionDestination).\n3.1.1 System 1: Baseline To establish baseline performance against which other systems can be compared, we developed a masked interface to a popular search engine without additional support in formulating queries.\nThis system presented the user-constructed query to the search engine and returned ten top-ranking documents retrieved by the engine.\nTo remove potential bias that may have been caused by subjects' prior perceptions, we removed all identifying information such as search engine logos and distinguishing interface features.\n3.1.2 System 2: QuerySuggestion In addition to the basic search functionality offered by Baseline, QuerySuggestion provides suggestions about further query refinements that searchers can make following an initial query submission.\nThese suggestions are computed using the search engine query log over the timeframe used for trail generation.\nFor each target query, we retrieve two sets of candidate suggestions that contain the target query as a substring.\nOne set is composed of the 100 most frequent such queries, while the second set contains the 100 most frequent queries that followed the target query in query logs.\nEach candidate query is then scored by multiplying its smoothed overall frequency by its smoothed frequency of following the target query in past search sessions, using Laplacian smoothing.\nBased on these scores, six top-ranked query suggestions are returned.\nIf fewer than six suggestions are found, iterative backoff is performed using progressively longer suffixes of the target query; a similar strategy is described in [10].
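The candidate scoring used by QuerySuggestion can be sketched in the same style. This is an illustrative Python version under stated assumptions: the counter inputs, the add-alpha (Laplacian) smoothing form, and the omission of the suffix-backoff step are simplifications, since the exact normalizations are not given above.

```python
from collections import Counter


def suggest_queries(target, query_freq, follow_freq, k=6, alpha=1.0):
    """Rank candidate refinements of `target` in the style of Section 3.1.2.

    query_freq:  Counter of overall query frequencies in the log.
    follow_freq: Counter keyed by (query_a, query_b): b was issued right after a.
    """
    containing = Counter({q: f for q, f in query_freq.items()
                          if target in q and q != target})
    following = Counter({b: f for (a, b), f in follow_freq.items() if a == target})
    # Two candidate sets: 100 most frequent containing queries, 100 most frequent followers.
    candidates = set(q for q, _ in containing.most_common(100))
    candidates |= set(q for q, _ in following.most_common(100))

    total_overall = sum(query_freq.values())
    total_follow = sum(following.values())
    vocab = max(len(query_freq), 1)

    def score(q):
        # Smoothed overall frequency times smoothed frequency of following the target.
        p_overall = (query_freq[q] + alpha) / (total_overall + alpha * vocab)
        p_follow = (following[q] + alpha) / (total_follow + alpha * vocab)
        return p_overall * p_follow

    ranked = sorted(candidates, key=score, reverse=True)[:k]
    # If fewer than k candidates remain, the paper backs off to progressively longer
    # suffixes of the target query; that step is omitted here for brevity.
    return ranked
```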
Suggestions were offered in a box positioned on the top-right of the result page, adjacent to the search results.\nFigure 1a shows the position of the suggestions on the page.\nFigure 1b shows a zoomed view of the portion of the results page containing the suggestions offered for the query [hubble telescope].\n(a) Position of suggestions (b) Zoomed suggestions Figure 1.\nQuery suggestion presentation in QuerySuggestion.\nTo the left of each query suggestion is an icon similar to a progress bar that encodes its normalized popularity.\nClicking a suggestion retrieves new search results for that query.\n3.1.3 System 3: QueryDestination QueryDestination uses an interface similar to QuerySuggestion.\nHowever, instead of showing query refinements for the submitted query, QueryDestination suggests up to six destinations frequently visited by other users who submitted queries similar to the current one, and computed as described in the previous section.3 Figure 2a shows the position of the destination suggestions on the search results page.\nFigure 2b shows a zoomed view of the portion of the results page containing the destinations suggested for the query [hubble telescope].\n(a) Position of destinations (b) Zoomed destinations Figure 2.\nDestination presentation in QueryDestination.\nTo keep the interface uncluttered, the page title of each destination is shown on hover over the page URL (shown in Figure 2b).\nNext to the destination name, there is a clickable icon that allows the user to execute a search for the current query within the destination domain displayed.\nWe show destinations as a separate list, rather than increasing their search result rank, since they may topically deviate from the original query (e.g., those focusing on related topics or not containing the original query terms).\n3.1.4 System 4: SessionDestination The interface functionality in SessionDestination is analogous to QueryDestination.\nThe only difference between the two systems is the definition of trail end-points for queries used in computing top destinations.\nQueryDestination directs users to the domains others end up at for the active or similar queries.\nIn contrast, SessionDestination directs users to the domains other users visit at the end of the search session that follows the active or similar queries.\nThis downgrades the effect of multiple query iterations (i.e., we only care where users end up after submitting all queries), rather than directing searchers to potentially irrelevant domains that may precede a query reformulation.\n3.2 Research Questions We were interested in determining the value of popular destinations.\nTo do this we attempt to answer the following research questions: RQ1: Are popular destinations preferable and more effective than query refinement suggestions and unaided Web search for: a. Searches that are well-defined (known-item tasks)?\nb. Searches that are ill-defined (exploratory tasks)?\nRQ2: Should popular destinations be taken from the end of query trails or the end of session trails?\n3 To improve reliability, in a similar way to QuerySuggestion, destinations are only shown if their popularity exceeds a frequency threshold.\n3.3 Subjects 36 subjects (26 males and 10 females) participated in our study.\nThey were recruited through an email announcement within our organization where they hold a range of positions in different divisions.\nThe average age of subjects was 34.9 years (max=62, min=27, SD=6.2).\nAll are familiar with Web search, and conduct 7.5 searches per day on average (SD=4.1).\nThirty-one subjects (86.1%) reported general awareness of the query refinements offered by commercial Web search engines.\n3.4 Tasks Since the search task may influence information-seeking behavior [4], we made task type an independent variable in the study.\nWe constructed six known-item tasks and six open-ended, exploratory tasks that were rotated between systems and subjects as described in the next section.\nFigure 3 shows examples of the two task types.\nKnown-item task: Identify three tropical storms (hurricanes and typhoons) that have caused property damage and/or loss of life.\nExploratory task: You are considering purchasing a Voice Over Internet Protocol (VoIP) telephone.\nYou want to learn more about VoIP technology and providers that offer the service, and select the provider and telephone that best suits you.\nFigure 3.\nExamples of known-item and exploratory tasks.\nExploratory tasks were phrased as simulated work task situations [5], i.e., short search scenarios that were designed to reflect real-life information needs.\nThese tasks generally required subjects to gather background information on a topic or gather sufficient information to make an informed decision.\nThe known-item search tasks required search for particular items of information (e.g., activities, discoveries, names) for which the target was well-defined.\nA similar task classification has been used successfully in previous work [21].\nTasks were taken and adapted from the Text Retrieval Conference (TREC) Interactive Track [7], and questions posed on question-answering communities (Yahoo! Answers, Google Answers, and Windows Live QnA).
To motivate the subjects during their searches, we allowed them to select two known-item and two exploratory tasks at the beginning of the experiment from the six possibilities for each category, before seeing any of the systems or having the study described to them.\nPrior to the experiment all tasks were pilot tested with a small number of different subjects to help ensure that they were comparable in difficulty and selectability (i.e., the likelihood that a task would be chosen given the alternatives).\nPost-hoc analysis of the distribution of tasks selected by subjects during the full study showed no preference for any task in either category.\n3.5 Design and Methodology The study used a within-subjects experimental design.\nSystem had four levels (corresponding to the four experimental systems) and search tasks had two levels (corresponding to the two task types).\nSystem and task-type order were counterbalanced according to a Graeco-Latin square design.\nSubjects were tested independently and each experimental session lasted for up to one hour.\nWe adhered to the following procedure: 1.\nUpon arrival, subjects were asked to select two known-item and two exploratory tasks from the six tasks of each type.\n2.\nSubjects were given an overview of the study in written form that was read aloud to them by the experimenter.\n3.\nSubjects completed a demographic questionnaire focusing on aspects of search experience.\n4.\nFor each of the four interface conditions: a. Subjects were given an explanation of interface functionality lasting around 2 minutes.\nb. Subjects were instructed to attempt the task on the assigned system searching the Web, and were allotted up to 10 minutes to do so.\nc. Upon completion of the task, subjects were asked to complete a post-search questionnaire.\n5.\nAfter completing the tasks on the four systems, subjects answered a final questionnaire comparing their experiences on the systems.\n6.\nSubjects were thanked and compensated.\nIn the next section we present the findings of this study.\n4.\nFINDINGS In this section we use the data derived from the experiment to address our hypotheses about query suggestions and destinations, providing information on the effect of task type and topic familiarity where appropriate.\nParametric statistical testing is used in this analysis and the level of significance is set to p < 0.05, unless otherwise stated.\nAll Likert scales and semantic differentials used a 5-point scale where a rating closer to one signifies more agreement with the attitude statement.\n4.1 Subject Perceptions In this section we present findings on how subjects perceived the systems that they used.\nResponses to post-search (per-system) and final questionnaires are used as the basis for our analysis.\n4.1.1 Search Process To address the first research question, we wanted insight into subjects' perceptions of the search experience on each of the four systems.\nIn the post-search questionnaires, we asked subjects to complete four 5-point semantic differentials indicating their responses to the attitude statement: The search we asked you to perform was:.\nThe paired stimuli offered as responses were: relaxing/stressful, interesting/boring, restful/tiring, and easy/difficult.\nThe average obtained differential values are shown in Table 1 for each system and each task type.\nThe value corresponding to the differential All represents the mean of all four differentials, providing an overall measure of subjects' 
feelings.\nTable 1.\nPerceptions of search process (lower = better).\nDifferential Known-item Exploratory B QS QD SD B QS QD SD Easy 2.6 1.6 1.7 2.3 2.5 2.6 1.9 2.9 Restful 2.8 2.3 2.4 2.6 2.8 2.8 2.4 2.8 Interesting 2.4 2.2 1.7 2.2 2.2 1.8 1.8 2 Relaxing 2.6 1.9 2 2.2 2.5 2.8 2.3 2.9 All 2.6 2 1.9 2.3 2.5 2.5 2.1 2.7 Each cell in Table 1 summarizes subject responses for 18 tasksystem pairs (18 subjects who ran a known-item task on Baseline (B), 18 subjects who ran an exploratory task on QuerySuggestion (QS), etc.).\nThe most positive response across all systems for each differential-task pair is shown in bold.\nWe applied two-way analysis of variance (ANOVA) to each differential across all four systems and two task types.\nSubjects found the search easier on QuerySuggestion and QueryDestination than the other systems for known-item tasks.4 For exploratory tasks, only searches conducted on QueryDestination were easier than on the other systems.5 Subjects indicated that exploratory tasks on the three non-baseline systems were more stressful (i.e., less relaxing) than the knownitem tasks.6 As we will discuss in more detail in Section 4.1.3, subjects regarded the familiarity of Baseline as a strength, and may have struggled to attempt a more complex task while learning a new interface feature such as query or destination suggestions.\n4.1.2 Interface Support We solicited subjects'' opinions on the search support offered by QuerySuggestion, QueryDestination, and SessionDestination.\nThe following Likert scales and semantic differentials were used: \u2022 Likert scale A: Using this system enhances my effectiveness in finding relevant information.\n(Effectiveness)7 \u2022 Likert scale B: The queries/destinations suggested helped me get closer to my information goal.\n(CloseToGoal) \u2022 Likert scale C: I would re-use the queries/destinations suggested if I encountered a similar task in the future (Re-use) \u2022 Semantic differential A: The queries/destinations suggested by the system were: relevant/irrelevant, useful/useless, appropriate/inappropriate.\nWe did not include these in the post-search questionnaire when subjects used the Baseline system as they refer to interface support options that Baseline did not offer.\nTable 2 presents the average responses for each of these scales and differentials, using the labels after each of the first three Likert scales in the bulleted list above.\nThe values for the three semantic differentials are included at the bottom of the table, as is their overall average under All.\nTable 2.\nPerceptions of system support (lower = better).\nScale / Differential Known-item Exploratory QS QD SD QS QD SD Effectiveness 2.7 2.5 2.6 2.8 2.3 2.8 CloseToGoal 2.9 2.7 2.8 2.7 2.2 3.1 Re-use 2.9 3 2.4 2.5 2.5 3.2 1 Relevant 2.6 2.5 2.8 2.4 2 3.1 2 Useful 2.6 2.7 2.8 2.7 2.1 3.1 3 Appropriate 2.6 2.4 2.5 2.4 2.4 2.6 All {1,2,3} 2.6 2.6 2.6 2.6 2.3 2.9 The results show that all three experimental systems improved subjects'' perceptions of their search effectiveness over Baseline, although only QueryDestination did so significantly.8 Further examination of the effect size (measured using Cohen``s d) revealed that QueryDestination affects search effectiveness most positively.9 QueryDestination also appears to get subjects closer to their information goal (CloseToGoal) than QuerySuggestion or 4 easy: F(3,136) = 4.71, p = .0037; Tukey post-hoc tests: all p \u2264 .008 5 easy: F(3,136) = 3.93, p = .01; Tukey post-hoc tests: all p \u2264 .012 6 relaxing: F(1,136) = 6.47, p = 
.011 7 This question was conditioned on subjects'' use of Baseline and their previous Web search experiences.\n8 F(3,136) = 4.07, p = .008; Tukey post-hoc tests: all p \u2264 .002 9 QS: d(K,E) = (.26, .52); QD: d(K,E) = (.77, 1.50); SD: d(K,E) = (.48, .28) SessionDestination, although only for exploratory search tasks.10 Additional comments on QuerySuggestion conveyed that subjects saw it as a convenience (to save them typing a reformulation) rather than a way to dramatically influence the outcome of their search.\nFor exploratory searches, users benefited more from being pointed to alternative information sources than from suggestions for iterative refinements of their queries.\nOur findings also show that our subjects felt that QueryDestination produced more relevant and useful suggestions for exploratory tasks than the other systems.11 All other observed differences between the systems were not statistically significant.12 The difference between performance of QueryDestination and SessionDestination is explained by the approach used to generate destinations (described in Section 2).\nSessionDestination``s recommendations came from the end of users'' session trails that often transcend multiple queries.\nThis increases the likelihood that topic shifts adversely affect their relevance.\n4.1.3 System Ranking In the final questionnaire that followed completion of all tasks on all systems, subjects were asked to rank the four systems in descending order based on their preferences.\nTable 3 presents the mean average rank assigned to each of the systems.\nTable 3.\nRelative ranking of systems (lower = better).\nSystems Baseline QSuggest QDest SDest Ranking 2.47 2.14 1.92 2.31 These results indicate that subjects preferred QuerySuggestion and QueryDestination overall.\nHowever, none of the differences between systems'' ratings are significant.13 One possible explanation for these systems being rated higher could be that although the popular destination systems performed well for exploratory searches while QuerySuggestion performed well for known-item searches, an overall ranking merges these two performances.\nThis relative ranking reflects subjects'' overall perceptions, but does not separate them for each task category.\nOver all tasks there appeared to be a slight preference for QueryDestination, but as other results show, the effect of task type on subjects'' perceptions is significant.\nThe final questionnaire also included open-ended questions that asked subjects to explain their system ranking, and describe what they liked and disliked about each system: Baseline: Subjects who preferred Baseline commented on the familiarity of the system (e.g., was familiar and I didn``t end up using suggestions (S36)).\nThose who did not prefer this system disliked the lack of support for query formulation (Can be difficult if you don``t pick good search terms (S20)) and difficulty locating relevant documents (e.g., Difficult to find what I was looking for (S13); Clunky current technology (S30)).\nQuerySuggestion: Subjects who rated QuerySuggestion highest commented on rapid support for query formulation (e.g., was useful in (1) saving typing (2) coming up with new ideas for query expansion (S12); helps me better phrase the search term (S24); made my next query easier (S21)).\nThose who did not prefer this system criticized suggestion quality (e.g., Not relevant (S11); Popular 10 F(2,102) = 5.00, p = .009; Tukey post-hoc tests: all p \u2264 .012 11 F(2,102) = 4.01, p = .01; \u03b1 = .0167 12 Tukey 
post-hoc tests: all p \u2265 .143 13 One-way repeated measures ANOVA: F(3,105) = 1.50, p = .22 queries weren``t what I was looking for (S18)) and the quality of results they led to (e.g., Results (after clicking on suggestions) were of low quality (S35); Ultimately unhelpful (S1)).\nQueryDestination: Subjects who preferred this system commented mainly on support for accessing new information sources (e.g., provided potentially helpful and new areas / domains to look at (S27)) and bypassing the need to browse to these pages (Useful to try to `cut to the chase'' and go where others may have found answers to the topic (S3)).\nThose who did not prefer this system commented on the lack of specificity in the suggested domains (Should just link to site-specific query, not site itself (S16); Sites were not very specific (S24); Too general/vague (S28)14 ), and the quality of the suggestions (Not relevant (S11); Irrelevant (S6)).\nSessionDestination: Subjects who preferred this system commented on the utility of the suggested domains (suggestions make an awful lot of sense in providing search assistance, and seemed to help very nicely (S5)).\nHowever, more subjects commented on the irrelevance of the suggestions (e.g., did not seem reliable, not much help (S30); Irrelevant, not my style (S21), and the related need to include explanations about why the suggestions were offered (e.g., Low-quality results, not enough information presented (S35)).\nThese comments demonstrate a diverse range of perspectives on different aspects of the experimental systems.\nWork is obviously needed in improving the quality of the suggestions in all systems, but subjects seemed to distinguish the settings when each of these systems may be useful.\nEven though all systems can at times offer irrelevant suggestions, subjects appeared to prefer having them rather than not (e.g., one subject remarked suggestions were helpful in some cases and harmless in all (S15)).\n4.1.4 Summary The findings obtained from our study on subjects'' perceptions of the four systems indicate that subjects tend to prefer QueryDestination for the exploratory tasks and QuerySuggestion for the known-item searches.\nSuggestions to incrementally refine the current query may be preferred by searchers on known-item tasks when they may have just missed their information target.\nHowever, when the task is more demanding, searchers appreciate suggestions that have the potential to dramatically influence the direction of a search or greatly improve topic coverage.\n4.2 Search Tasks To gain a better understanding of how subjects performed during the study, we analyze data captured on their perceptions of task completeness and the time that it took them to complete each task.\n4.2.1 Subject Perceptions In the post-search questionnaire, subjects were asked to indicate on a 5-point Likert scale the extent to which they agreed with the following attitude statement: I believe I have succeeded in my performance of this task (Success).\nIn addition, they were asked to complete three 5-point semantic differentials indicating their response to the attitude statement: The task we asked you to perform was: The paired stimuli offered as possible responses were clear/unclear, simple/complex, and familiar/ unfamiliar.\nTable 4 presents the mean average response to these statements for each system and task type.\n14 Although the destination systems provided support for search within a domain, subjects mainly chose to ignore this.\nTable 4.\nPerceptions of task and task 
success (lower = better).\nScale: Known-item (B / QS / QD / SD), Exploratory (B / QS / QD / SD)\nSuccess: 2.0 / 1.3 / 1.4 / 1.4, 2.8 / 2.3 / 1.4 / 2.6\n1 Clear: 1.2 / 1.1 / 1.1 / 1.1, 1.6 / 1.5 / 1.5 / 1.6\n2 Simple: 1.9 / 1.4 / 1.8 / 1.8, 2.4 / 2.9 / 2.4 / 3\n3 Familiar: 2.2 / 1.9 / 2.0 / 2.2, 2.6 / 2.5 / 2.7 / 2.7\nAll {1,2,3}: 1.8 / 1.4 / 1.6 / 1.8, 2.2 / 2.2 / 2.2 / 2.3\nSubject responses demonstrate that users felt that their searches had been more successful using QueryDestination for exploratory tasks than with the other three systems (i.e., there was a two-way interaction between these two variables).15 In addition, subjects perceived a significantly greater sense of completion with known-item tasks than with exploratory tasks.16 Subjects also found known-item tasks to be more simple, clear, and familiar.17\nThese responses confirm differences in the nature of the tasks we had envisaged when planning the study.\nAs illustrated by the examples in Figure 3, the known-item tasks required subjects to retrieve a finite set of answers (e.g., find three interesting things to do during a weekend visit to Kyoto, Japan).\nIn contrast, the exploratory tasks were multi-faceted, and required subjects to find out more about a topic or to find sufficient information to make a decision.\nThe end-point in such tasks was less well-defined and may have affected subjects' perceptions of when they had completed the task.\nGiven that there was no difference in the tasks attempted on each system, theoretically the perception of the tasks' simplicity, clarity, and familiarity should have been the same for all systems.\nHowever, we observe a clear interaction effect between the system and subjects' perception of the actual tasks.\n4.2.2 Task Completion Time In addition to asking subjects to indicate the extent to which they felt the task was completed, we also monitored the time that it took them to indicate to the experimenter that they had finished.\nThe elapsed time from when the subject began issuing their first query until when they indicated that they were done was monitored using a stopwatch and recorded for later analysis.\nA stopwatch rather than system logging was used for this since we wanted to record the time regardless of system interactions.\nFigure 4 shows the average task completion time for each system and each task type.\nFigure 4.\nMean average task completion time (\u00b1 SEM).\nMean times in seconds: known-item 348.8 (Baseline), 272.3 (QuerySuggestion), 232.3 (QueryDestination), 359.8 (SessionDestination); exploratory 513.7, 467.8, 474.2, and 472.2, respectively.\n15 F(3,136) = 6.34, p = .001\n16 F(1,136) = 18.95, p < .001\n17 F(1,136) = 6.82, p = .028; Known-item tasks were also more simple on QS (F(3,136) = 3.93, p = .01; Tukey post-hoc test: p = .01); \u03b1 = .167\nAs can be seen in the figure above, the task completion times for the known-item tasks differ greatly between systems.18 Subjects attempting these tasks on QueryDestination and QuerySuggestion complete them in less time than subjects on Baseline and SessionDestination.19 As discussed in the previous section, subjects were more familiar with the known-item tasks, and felt they were simpler and clearer.\nBaseline may have taken longer than the other systems since users had no additional support and had to formulate their own queries.\nSubjects generally felt that the recommendations offered by SessionDestination were of low relevance and usefulness.\nConsequently, the completion time increased slightly between these two systems perhaps as the subjects assessed the value of the proposed suggestions, but reaped little benefit from them.
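The inferential statistics reported throughout this section (two-way ANOVAs over system and task type, followed by Tukey post-hoc comparisons) can be reproduced on data of this shape with standard tooling. The Python sketch below uses synthetic completion times purely as a placeholder for the study's measurements; the column names and generated values are assumptions for illustration, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the study data: 18 observations per system/task-type cell.
rng = np.random.default_rng(0)
rows = []
for system in ["Baseline", "QuerySuggestion", "QueryDestination", "SessionDestination"]:
    for task_type in ["known-item", "exploratory"]:
        for _ in range(18):
            rows.append({"system": system, "task_type": task_type,
                         "completion_time": rng.normal(400, 80)})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of system and task type plus their interaction.
model = smf.ols("completion_time ~ C(system) * C(task_type)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey post-hoc comparisons between systems, as used for the pairwise tests reported above.
print(pairwise_tukeyhsd(df["completion_time"], df["system"], alpha=0.05))
```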
The task completion times for the exploratory tasks were approximately equal on all four systems,20 although the time on Baseline was slightly higher.\nSince these tasks had no clearly defined termination criteria (i.e., the subject decided when they had gathered sufficient information), subjects generally spent longer searching, and consulted a broader range of information sources than in the known-item tasks.\n4.2.3 Summary Analysis of subjects' perception of the search tasks and aspects of task completion shows that the QuerySuggestion system made subjects feel more successful (and the task more simple, clear, and familiar) for the known-item tasks.\nOn the other hand, QueryDestination was shown to lead to heightened perceptions of search success and task ease, clarity, and familiarity for the exploratory tasks.\nTask completion times on both systems were significantly lower than on the other systems for known-item tasks.\n4.3 Subject Interaction We now focus our analysis on the observed interactions between searchers and systems.\nAs well as eliciting feedback on each system from our subjects, we also recorded several aspects of their interaction with each system in log files.\nIn this section, we analyze three interaction aspects: query iterations, search-result clicks, and subject engagement with the additional interface features offered by the three non-baseline systems.\n4.3.1 Queries and Result Clicks Searchers typically interact with search systems by submitting queries and clicking on search results.\nAlthough our system offers additional interface affordances, we begin this section by analyzing querying and clickthrough behavior of our subjects to better understand how they conducted core search activities.\nTable 5 shows the average number of query iterations and search results clicked for each system-task pair.\nThe average value in each cell is computed for 18 subjects on each task type and system.\nTable 5.\nAverage query iterations and result clicks (per task).\nScale: Known-item (B / QS / QD / SD), Exploratory (B / QS / QD / SD)\nQueries: 1.9 / 4.2 / 1.5 / 2.4, 3.1 / 5.7 / 2.7 / 3.5\nResult clicks: 2.6 / 2 / 1.7 / 2.4, 3.4 / 4.3 / 2.3 / 5.1\nSubjects submitted fewer queries and clicked on fewer search results in QueryDestination than in any of the other systems.21\n18 F(3,136) = 4.56, p = .004\n19 Tukey post-hoc tests: all p \u2264 .021\n20 F(3,136) = 1.06, p = .37\n21 Queries: F(3,443) = 3.99; p = .008; Tukey post-hoc tests: all p \u2264 .004; Systems: F(3,431) = 3.63, p = .013; Tukey post-hoc tests: all p \u2264 .011\nAs discussed in the previous section, subjects using this system felt more successful in their searches yet they exhibited less of the traditional query and result-click interactions required for search success on traditional search systems.\nIt may be the case that subjects' queries on this system were more effective, but it is more likely that they interacted less with the system through these means and elected to use the popular destinations instead.\nOverall, subjects submitted most queries in QuerySuggestion, which is not surprising as this system actively encourages searchers to iteratively re-submit refined queries.\nSubjects interacted similarly with Baseline and SessionDestination systems, perhaps due to the low quality of the popular destinations in the latter.\nTo investigate this and related issues, we will next analyze usage of the suggestions on the three non-baseline systems.\n4.3.2 Suggestion Usage To determine whether subjects found additional features useful, we measure the 
extent to which they were used when they were provided.\nSuggestion usage is defined as the proportion of submitted queries for which suggestions were offered and at least one suggestion was clicked.\nTable 6 shows the average usage for each system and task category.\nTable 6.\nSuggestion uptake (values are percentages).\nMeasure Known-item Exploratory QS QD SD QS QD SD Usage 35.7 33.5 23.4 30.0 35.2 25.3 Results indicate that QuerySuggestion was used more for knownitem tasks than SessionDestination22 , and QueryDestination was used more than all other systems for the exploratory tasks.23 For well-specified targets in known-item search, subjects appeared to use query refinement most heavily.\nIn contrast, when subjects were exploring, they seemed to benefit most from the recommendation of additional information sources.\nSubjects selected almost twice as many destinations per query when using QueryDestination compared to SessionDestination.24 As discussed earlier, this may be explained by the lower perceived relevance and usefulness of destinations recommended by SessionDestination.\n4.3.3 Summary Analysis of log interaction data gathered during the study indicates that although subjects submitted fewer queries and clicked fewer search results on QueryDestination, their engagement with suggestions was highest on this system, particularly for exploratory search tasks.\nThe refined queries proposed by QuerySuggestion were used the most for the known-item tasks.\nThere appears to be a clear division between the systems: QuerySuggestion was preferred for known-item tasks, while QueryDestination provided most-used support for exploratory tasks.\n5.\nDISCUSSION AND IMPLICATIONS The promising findings of our study suggest that systems offering popular destinations lead to more successful and efficient searching compared to query suggestion and unaided Web search.\nSubjects seemed to prefer QuerySuggestion for the known-item tasks where the information-seeking goal was well-defined.\nIf the initial query does not retrieve relevant information, then subjects 22 F(2,355) = 4.67, p = .01; Tukey post-hoc tests: p = .006 23 Tukey``s post-hoc tests: all p \u2264 .027 24 QD: MK = 1.8, ME = 2.1; SD: MK = 1.1, ME = 1.2; F(1,231) = 5.49, p = .02; Tukey post-hoc tests: all p \u2264 .003; (M represents mean average).\nappreciate support in deciding what refinements to make to the query.\nFrom examination of the queries that subjects entered for the known-item searches across all systems, they appeared to use the initial query as a starting point, and add or subtract individual terms depending on search results.\nThe post-search questionnaire asked subjects to select from a list of proposed explanations (or offer their own explanations) as to why they used recommended query refinements.\nFor both known-item tasks and the exploratory tasks, around 40% of subjects indicated that they selected a query suggestion because they wanted to save time typing a query, while less than 10% of subjects did so because the suggestions represented new ideas.\nThus, subjects seemed to view QuerySuggestion as a time-saving convenience, rather than a way to dramatically impact search effectiveness.\nThe two variants of recommending destinations that we considered, QueryDestination and SessionDestination, offered suggestions that differed in their temporal proximity to the current query.\nThe quality of the destinations appeared to affect subjects'' perceptions of them and their task performance.\nAs discussed earlier, domains 
residing at the end of a complete search session (as in SessionDestination) are more likely to be unrelated to the current query, and thus are less likely to constitute valuable suggestions.\nDestination systems, in particular QueryDestination, performed best for the exploratory search tasks, where subjects may have benefited from exposure to additional information sources whose topical relevance to the search query is indirect.\nAs with QuerySuggestion, subjects were asked to offer explanations for why they selected destinations.\nOver both task types they suggested that destinations were clicked because they grabbed their attention (40%), represented new ideas (25%), or users couldn``t find what they were looking for (20%).\nThe least popular responses were wanted to save time typing the address (7%) and the destination was popular (3%).\nThe positive response to destination suggestions from the study subjects provides interesting directions for design refinements.\nWe were surprised to learn that subjects did not find the popularity bars useful, or hardly used the within-site search functionality, inviting re-design of these components.\nSubjects also remarked that they would like to see query-based summaries for each suggested destination to support more informed selection, as well as categorization of destinations with capability of drill-down for each category.\nSince QuerySuggestion and QueryDestination perform well in distinct task scenarios, integrating both in a single system is an interesting future direction.\nWe hope to deploy some of these ideas on Web scale in future systems, which will allow log-based evaluation across large user pools.\n6.\nCONCLUSIONS We presented a novel approach for enhancing users'' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs.\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search.\nResults of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails.\nOverall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches by offering a new way to resolve information problems, and enhance the informationseeking experience for many Web searchers.\n7.\nREFERENCES [1] Agichtein, E., Brill, E. & Dumais, S. (2006).\nImproving Web search ranking by incorporating user behavior information.\nIn Proc.\nSIGIR, 19-26.\n[2] Anderson, C. et al. (2001).\nAdaptive Web navigation for wireless devices.\nIn Proc.\nIJCAI, 879-884.\n[3] Anick, P. (2003).\nUsing terminological feedback for Web search refinement: A log-based study.\nIn Proc.\nSIGIR, 88-95.\n[4] Beaulieu, M. (1997).\nExperiments with interfaces to support query expansion.\nJ. Doc.\n53, 1, 8-19.\n[5] Borlund, P. (2000).\nExperimental components for the evaluation of interactive information retrieval systems.\nJ. Doc.\n56, 1, 71-90.\n[6] Downey et al. (2007).\nModels of searching and browsing: languages, studies and applications.\nIn Proc.\nIJCAI, 1465-72.\n[7] Dumais, S.T. & Belkin, N.J. (2005).\nThe TREC interactive tracks: putting the user into search.\nIn Voorhees, E.M. and Harman, D.K. 
(eds.)\nTREC: Experiment and Evaluation in Information Retrieval.\nCambridge, MA: MIT Press, 123-153.\n[8] Furnas, G. W. (1985).\nExperience with an adaptive indexing scheme.\nIn Proc.\nCHI, 131-135.\n[9] Hickl, A. et al. (2006).\nFERRET: Interactive questionanswering for real-world environments.\nIn Proc.\nof COLING/ACL, 25-28.\n[10] Jones, R., et al. (2006).\nGenerating query substitutions.\nIn Proc.\nWWW, 387-396.\n[11] Koenemann, J. & Belkin, N. (1996).\nA case for interaction: a study of interactive information retrieval behavior and effectiveness.\nIn Proc.\nCHI, 205-212.\n[12] O``Day, V. & Jeffries, R. (1993).\nOrienteering in an information landscape: how information seekers get from here to there.\nIn Proc.\nCHI, 438-445.\n[13] Radlinski, F. & Joachims, T. (2005).\nQuery chains: Learning to rank from implicit feedback.\nIn Proc.\nKDD, 239-248.\n[14] Salton, G. & Buckley, C. (1988) Term-weighting approaches in automatic text retrieval.\nInf.\nProc.\nManage.\n24, 513-523.\n[15] Silverstein, C. et al. (1999).\nAnalysis of a very large Web search engine query log.\nSIGIR Forum 33, 1, 6-12.\n[16] Smyth, B. et al. (2004).\nExploiting query repetition and regularity in an adaptive community-based Web search engine.\nUser Mod.\nUser Adapt.\nInt.\n14, 5, 382-423.\n[17] Spink, A. et al. (2002).\nU.S. versus European Web searching trends.\nSIGIR Forum 36, 2, 32-38.\n[18] Spink, A., et al. (2006).\nMultitasking during Web search sessions.\nInf.\nProc.\nManage., 42, 1, 264-275.\n[19] Wexelblat, A. & Maes, P. (1999).\nFootprints: history-rich tools for information foraging.\nIn Proc.\nCHI, 270-277.\n[20] White, R.W. & Drucker, S.M. (2007).\nInvestigating behavioral variability in Web search.\nIn Proc.\nWWW, 21-30.\n[21] White, R.W. & Marchionini, G. (2007).\nExamining the effectiveness of real-time query expansion.\nInf.\nProc.\nManage.\n43, 685-704.", "lvl-3": "Studying the Use of Popular Destinations to Enhance Web Search Interaction\nABSTRACT\nWe present a novel Web search interaction feature which , for a given query , provides links to websites frequently visited by other users with similar information needs .\nThese popular destinations complement traditional search results , allowing direct navigation to authoritative resources for the query topic .\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period , whose collective behavior provides a basis for computing source authority .\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries , as well as with traditional , unaided Web search .\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks , with best performance obtained from mining past user behavior at query-level granularity .\n1 .\nINTRODUCTION\nThe problem of improving queries sent to Information Retrieval ( IR ) systems has been studied extensively in IR research [ 4 ] [ 11 ] .\nAlternative query formulations , known as query suggestions , can be offered to users following an initial query , allowing them to modify the specification of their needs provided to the system , leading to improved retrieval performance .\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [ 10 ] .\nLeveraging the decision-making processes of many users for query 
reformulation has its roots in adaptive indexing [ 8 ] .\nIn recent years , applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work .\nHowever , interaction-based approaches to query suggestion may be less potent when the information need is exploratory , since a large proportion of user activity for such information needs may\noccur beyond search engine interactions .\nIn cases where directed searching is only a fraction of users ' information-seeking behavior , the utility of other users ' clicks over the space of top-ranked results may be limited , as it does not cover the subsequent browsing behavior .\nAt the same time , user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users , which may be particularly valuable for exploratory search tasks .\nThus , we propose exploiting a combination of past searching and browsing user behavior to enhance users ' Web search interactions .\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions .\nIn previous work , such data have been used to improve search result ranking by Agichtein et al. [ 1 ] .\nHowever , this approach only considers page visitation statistics independently of each other , not taking into account the pages ' relative positions on post-query browsing paths .\nRadlinski and Joachims [ 13 ] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations , yet their approach does not consider users ' interactions beyond the search result page .\nIn this paper , we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages , referred to as destinations henceforth , in addition to the regular search results .\nThe destinations may not be among the topranked results , may not contain the queried terms , or may not even be indexed by the search engine .\nInstead , they are pages at which other users end up frequently after submitting same or similar queries and then browsing away from initially clicked search results .\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs , and our results support this hypothesis .\nIn prior work , O'Day and Jeffries [ 12 ] identified `` teleportation '' as an information-seeking strategy employed by users jumping to their previously-visited information targets , while Anderson et al. 
[ 2 ] applied similar principles to support the rapid navigation of Web sites on mobile devices .\nIn [ 19 ] , Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users .\nHowever , we are not aware of such principles being applied to Web search .\nResearch in the area of recommender systems has also addressed similar issues , but in areas such as question-answering [ 9 ] and relatively small online communities [ 16 ] .\nPerhaps the nearest instantiation of teleportation is search engines ' offering of several within-domain shortcuts below the title of a search result .\nWhile these may be based on user behavior and possibly site structure , the user saves at most one click from this feature .\nIn contrast , our proposed approach can transport users to locations many clicks beyond the search result , saving time and giving them a broader perspective on the available related information .\nThe conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages .\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search , and seek answers to questions on : ( i ) user preference and search effectiveness for known-item and exploratory search tasks , and ( ii ) the preferred distance between query and destination used to identify popular destinations from past behavior logs .\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience , while providing query refinement suggestions is most desirable for known-item tasks .\nThe remainder of the paper is structured as follows .\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs , and their use in identifying top destinations for new queries .\nSection 3 describes the design of the user study , while Sections 4 and 5 present the study findings and their discussion , respectively .\nWe conclude in Section 6 with a summary .\n2 .\nSEARCH TRAILS AND DESTINATIONS\n2.1 Trail Extraction\n2.2 Trail and Destination Analysis\n2.3 Destination Prediction\n1 Independent measures t-test : t ( ~ 60M ) = 3.89 , p < .001\n3 .\nSTUDY\n3.1 Systems\n3.1.1 System 1 : Baseline\n3.1.2 System 2 : QuerySuggestion\n3.1.3 System 3 : QueryDestination\n3.1.4 System 4 : SessionDestination\n3.2 Research Questions\n3.3 Subjects\n3.4 Tasks\n3.5 Design and Methodology\n4 .\nFINDINGS\n4.1 Subject Perceptions\n4.1.1 Search Process\n4.1.2 Interface Support\n4.1.3 System Ranking\n4.1.4 Summary\n4.2 Search Tasks\n4.2.1 Subject Perceptions\n4.2.2 Task Completion Time\n4.2.3 Summary\n4.3 Subject Interaction\n4.3.1 Queries and Result Clicks\n4.3.2 Suggestion Usage\n4.3.3 Summary\n6 .\nCONCLUSIONS\nWe presented a novel approach for enhancing users ' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs .\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search .\nResults of our study revealed that : ( i ) systems suggesting query refinements were preferred for known-item tasks , ( ii ) systems offering popular destinations were preferred for exploratory search tasks , and ( iii ) destinations should be mined from the end of query trails , not session trails .\nOverall , popular 
destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches by offering a new way to resolve information problems , and enhance the informationseeking experience for many Web searchers .", "lvl-4": "Studying the Use of Popular Destinations to Enhance Web Search Interaction\nABSTRACT\nWe present a novel Web search interaction feature which , for a given query , provides links to websites frequently visited by other users with similar information needs .\nThese popular destinations complement traditional search results , allowing direct navigation to authoritative resources for the query topic .\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period , whose collective behavior provides a basis for computing source authority .\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries , as well as with traditional , unaided Web search .\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks , with best performance obtained from mining past user behavior at query-level granularity .\n1 .\nINTRODUCTION\nThe problem of improving queries sent to Information Retrieval ( IR ) systems has been studied extensively in IR research [ 4 ] [ 11 ] .\nAlternative query formulations , known as query suggestions , can be offered to users following an initial query , allowing them to modify the specification of their needs provided to the system , leading to improved retrieval performance .\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [ 10 ] .\nLeveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [ 8 ] .\nHowever , interaction-based approaches to query suggestion may be less potent when the information need is exploratory , since a large proportion of user activity for such information needs may\noccur beyond search engine interactions .\nIn cases where directed searching is only a fraction of users ' information-seeking behavior , the utility of other users ' clicks over the space of top-ranked results may be limited , as it does not cover the subsequent browsing behavior .\nAt the same time , user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users , which may be particularly valuable for exploratory search tasks .\nThus , we propose exploiting a combination of past searching and browsing user behavior to enhance users ' Web search interactions .\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions .\nIn previous work , such data have been used to improve search result ranking by Agichtein et al. 
[ 1 ] .\nRadlinski and Joachims [ 13 ] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations , yet their approach does not consider users ' interactions beyond the search result page .\nIn this paper , we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages , referred to as destinations henceforth , in addition to the regular search results .\nThe destinations may not be among the topranked results , may not contain the queried terms , or may not even be indexed by the search engine .\nInstead , they are pages at which other users end up frequently after submitting same or similar queries and then browsing away from initially clicked search results .\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs , and our results support this hypothesis .\nIn [ 19 ] , Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users .\nHowever , we are not aware of such principles being applied to Web search .\nPerhaps the nearest instantiation of teleportation is search engines ' offering of several within-domain shortcuts below the title of a search result .\nWhile these may be based on user behavior and possibly site structure , the user saves at most one click from this feature .\nIn contrast , our proposed approach can transport users to locations many clicks beyond the search result , saving time and giving them a broader perspective on the available related information .\nThe conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages .\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search , and seek answers to questions on : ( i ) user preference and search effectiveness for known-item and exploratory search tasks , and ( ii ) the preferred distance between query and destination used to identify popular destinations from past behavior logs .\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience , while providing query refinement suggestions is most desirable for known-item tasks .\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs , and their use in identifying top destinations for new queries .\nSection 3 describes the design of the user study , while Sections 4 and 5 present the study findings and their discussion , respectively .\n6 .\nCONCLUSIONS\nWe presented a novel approach for enhancing users ' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs .\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search .\nResults of our study revealed that : ( i ) systems suggesting query refinements were preferred for known-item tasks , ( ii ) systems offering popular destinations were preferred for exploratory search tasks , and ( iii ) destinations should be mined from the end of query trails , not session trails .\nOverall , popular destination suggestions strategically influenced searches in a way not achievable by query suggestion 
approaches by offering a new way to resolve information problems , and enhance the informationseeking experience for many Web searchers .", "lvl-2": "Studying the Use of Popular Destinations to Enhance Web Search Interaction\nABSTRACT\nWe present a novel Web search interaction feature which , for a given query , provides links to websites frequently visited by other users with similar information needs .\nThese popular destinations complement traditional search results , allowing direct navigation to authoritative resources for the query topic .\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period , whose collective behavior provides a basis for computing source authority .\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries , as well as with traditional , unaided Web search .\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks , with best performance obtained from mining past user behavior at query-level granularity .\n1 .\nINTRODUCTION\nThe problem of improving queries sent to Information Retrieval ( IR ) systems has been studied extensively in IR research [ 4 ] [ 11 ] .\nAlternative query formulations , known as query suggestions , can be offered to users following an initial query , allowing them to modify the specification of their needs provided to the system , leading to improved retrieval performance .\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [ 10 ] .\nLeveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [ 8 ] .\nIn recent years , applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work .\nHowever , interaction-based approaches to query suggestion may be less potent when the information need is exploratory , since a large proportion of user activity for such information needs may\noccur beyond search engine interactions .\nIn cases where directed searching is only a fraction of users ' information-seeking behavior , the utility of other users ' clicks over the space of top-ranked results may be limited , as it does not cover the subsequent browsing behavior .\nAt the same time , user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users , which may be particularly valuable for exploratory search tasks .\nThus , we propose exploiting a combination of past searching and browsing user behavior to enhance users ' Web search interactions .\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions .\nIn previous work , such data have been used to improve search result ranking by Agichtein et al. 
[ 1 ] .\nHowever , this approach only considers page visitation statistics independently of each other , not taking into account the pages ' relative positions on post-query browsing paths .\nRadlinski and Joachims [ 13 ] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations , yet their approach does not consider users ' interactions beyond the search result page .\nIn this paper , we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages , referred to as destinations henceforth , in addition to the regular search results .\nThe destinations may not be among the topranked results , may not contain the queried terms , or may not even be indexed by the search engine .\nInstead , they are pages at which other users end up frequently after submitting same or similar queries and then browsing away from initially clicked search results .\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs , and our results support this hypothesis .\nIn prior work , O'Day and Jeffries [ 12 ] identified `` teleportation '' as an information-seeking strategy employed by users jumping to their previously-visited information targets , while Anderson et al. [ 2 ] applied similar principles to support the rapid navigation of Web sites on mobile devices .\nIn [ 19 ] , Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users .\nHowever , we are not aware of such principles being applied to Web search .\nResearch in the area of recommender systems has also addressed similar issues , but in areas such as question-answering [ 9 ] and relatively small online communities [ 16 ] .\nPerhaps the nearest instantiation of teleportation is search engines ' offering of several within-domain shortcuts below the title of a search result .\nWhile these may be based on user behavior and possibly site structure , the user saves at most one click from this feature .\nIn contrast , our proposed approach can transport users to locations many clicks beyond the search result , saving time and giving them a broader perspective on the available related information .\nThe conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages .\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search , and seek answers to questions on : ( i ) user preference and search effectiveness for known-item and exploratory search tasks , and ( ii ) the preferred distance between query and destination used to identify popular destinations from past behavior logs .\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience , while providing query refinement suggestions is most desirable for known-item tasks .\nThe remainder of the paper is structured as follows .\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs , and their use in identifying top destinations for new queries .\nSection 3 describes the design of the user study , while Sections 4 and 5 present the study findings and their discussion , respectively .\nWe conclude in Section 6 with a summary .\n2 .\nSEARCH TRAILS 
AND DESTINATIONS\nWe used Web activity logs containing searching and browsing activity collected with permission from hundreds of thousands of users over a five-month period between December 2005 and April 2006 .\nEach log entry included an anonymous user identifier , a timestamp , a unique browser window identifier , and the URL of a visited Web page .\nThis information was sufficient to reconstruct temporally ordered sequences of viewed pages that we refer to as `` trails '' .\nIn this section , we summarize the extraction of trails , their features , and destinations ( trail end-points ) .\nIn-depth description and analysis of trail extraction are presented in [ 20 ] .\n2.1 Trail Extraction\nFor each user , interaction logs were grouped based on browser identifier information .\nWithin each browser instance , participant navigation was summarized as a path known as a browser trail , from the first to the last Web page visited in that browser .\nLocated within some of these trails were search trails that originated with a query submission to a commercial search engine such as Google , Yahoo! , Windows Live Search , and Ask .\nIt is these search trails that we use to identify popular destinations .\nAfter originating with a query submission to a search engine , trails proceed until a point of termination where it is assumed that the user has completed their information-seeking activity .\nTrails must contain pages that are either : search result pages , search engine homepages , or pages connected to a search result page via a sequence of clicked hyperlinks .\nExtracting search trails using this methodology also goes some way toward handling multi-tasking , where users run multiple searches concurrently .\nSince users may open a new browser window ( or tab ) for each task [ 18 ] , each task has its own browser trail , and a corresponding distinct search trail .\nTo reduce the amount of `` noise '' from pages unrelated to the active search task that may pollute our data , search trails are terminated when one of the following events occurs : ( 1 ) a user returns to their homepage , checks e-mail , logs in to an online service ( e.g. , MySpace or del.ico.us ) , types a URL or visits a bookmarked page ; ( 2 ) a page is viewed for more than 30 minutes with no activity ; ( 3 ) the user closes the active browser window .\nIf a page ( at step i ) meets any of these criteria , the trail is assumed to terminate on the previous page ( i.e. , step i -- 1 ) .\nThere are two types of search trails we consider : session trails and query trails .\nSession trails transcend multiple queries and terminate only when one of the three termination criteria above are satisfied .\nQuery trails use the same termination criteria as session trails , but also terminate upon submission of a new query to a search engine .\nApproximately 14 million query trails and 4 million session trails were extracted from the logs .\nWe now describe some trail features .\n2.2 Trail and Destination Analysis\nTable 1 presents summary statistics for the query and session trails .\nDifferences in user interaction between the last domain on the trail ( Domain n ) and all domains visited earlier ( Domains 1 to ( n -- 1 ) ) are particularly important , because they highlight the wealth of user behavior data not captured by logs of search engine interactions .\nStatistics are averages for all trails with two or more steps ( i.e. 
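The trail-extraction rules of Section 2.1 can be summarized in a short sketch. This is not the authors' code: the log schema (user_id, browser_id, a datetime timestamp, a nav_type flag for "terminator" events such as homepage, e-mail, login, typed-URL and bookmark visits, and an is_query flag) and every helper name are assumptions introduced for illustration. Only the segmentation rules themselves (the terminator events, the 30-minute inactivity cutoff, the requirement that trails originate with a query submission, and the extra new-query boundary that distinguishes query trails from session trails) come from the text above.

```python
from collections import defaultdict
from datetime import timedelta

INACTIVITY_LIMIT = timedelta(minutes=30)                                # rule (2)
TERMINATORS = {"homepage", "email", "login", "typed_url", "bookmark"}   # rule (1)

def _flush(trail, out):
    if trail:
        out.append(list(trail))

def extract_trails(events):
    """Split per-browser event streams into query trails and session trails,
    following the termination criteria described in Section 2.1."""
    by_browser = defaultdict(list)
    for e in events:
        by_browser[(e["user_id"], e["browser_id"])].append(e)

    query_trails, session_trails = [], []
    for stream in by_browser.values():
        stream.sort(key=lambda e: e["timestamp"])
        session, query_trail, prev_time = [], [], None
        for e in stream:
            timed_out = prev_time is not None and e["timestamp"] - prev_time > INACTIVITY_LIMIT
            prev_time = e["timestamp"]
            if timed_out or e.get("nav_type") in TERMINATORS:
                # rules (1)-(2): the active trails end on the previous page (step i - 1)
                _flush(session, session_trails); _flush(query_trail, query_trails)
                session, query_trail = [], []
                continue                      # the terminating page itself starts no trail
            if e.get("is_query"):
                # a new query closes the current query trail; the session trail continues
                _flush(query_trail, query_trails)
                query_trail = []
            if session or e.get("is_query"):  # trails must originate with a query submission
                session.append(e)
                query_trail.append(e)
        # rule (3): closing the browser window simply ends the stream
        _flush(session, session_trails); _flush(query_trail, query_trails)
    return query_trails, session_trails
```

The end-points (last pages) of these two trail types are what Sections 2.2 and 2.3 refer to as query destinations and session destinations, respectively.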
, those trails where at least one search result was clicked ) .\nTable 1 .\nSummary statistics ( mean averages ) for search trails .\nThe statistics suggest that users generally browse far from the search results page ( i.e. , around 5 steps ) , and visit a range of domains during the course of their search .\nOn average , users visit 2 unique ( non search-engine ) domains per query trail , and just over 4 unique domains per session trail .\nThis suggests that users often do not find all the information they seek on the first domain they visit .\nFor query trails , users also visit more pages , and spend significantly longer , on the last domain in the trail compared to all previous domains combined .1 These distinctions of the last domains in the trails may indicate user interest , page utility , or page relevance .2\n2.3 Destination Prediction\nFor frequent queries , most popular destinations identified from Web activity logs could be simply stored for future lookup at search time .\nHowever , we have found that over the six-month period covered by our dataset , 56.9 % of queries are unique , and 97 % queries occur 10 or fewer times , accounting for 19.8 % and 66.3 % of all searches respectively ( these numbers are comparable to those reported in previous studies of search engine query logs [ 15,17 ] ) .\nTherefore , a lookup-based approach would prevent us from reliably suggesting destinations for a large fraction of searches .\nTo overcome this problem , we utilize a simple term-based prediction model .\nAs discussed above , we extract two types of destinations : query destinations and session destinations .\nFor both destination types , we obtain a corpus of query-destination pairs and use it to construct term-vector representation of destinations that is analogous to the classic tf.idf document representation in traditional IR [ 14 ] .\nThen , given a new query q consisting of k terms t1 ... tk , we identify highest-scoring destinations using the following similarity function :\n1 Independent measures t-test : t ( ~ 60M ) = 3.89 , p < .001\n2 The topical relevance of the destinations was tested for a subset of around ten thousand queries for which we had human judgments .\nThe average rating of most of the destinations lay between `` good '' and `` excellent '' .\nVisual inspection of those that did not lie in this range revealed that many were either relevant but had no judgments , or were related but had indirect query association ( e.g. 
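The similarity function referred to at the end of Section 2.3 does not survive in this extraction; only the surrounding footnotes remain. Based on the term-vector description above and the weight definitions w_q(t_i) and w_d(t_i) given immediately below, a plausible reconstruction is a tf.idf-style inner product over the query terms; the exact normalization used in the paper may differ.

```latex
% Hedged reconstruction only; the original formula was lost during extraction.
% q = t_1 \ldots t_k is the incoming query and d a candidate destination.
\mathrm{score}(q, d) \;=\; \sum_{i=1}^{k} w_q(t_i)\, w_d(t_i)
```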
, \u201cpetfooddirect.com '' for query [ dogs ] ) .\nWhere query and destination term weights , wq ( ti ) and wd ( ti ) , are computed using standard tf.idf weighting and query - and usersession-normalized smoothed tf.idf weighting , respectively .\nWhile exploring alternative algorithms for the destination prediction task remains an interesting challenge for future work , results of the user study described in subsequent sections demonstrate that this simple approach provides robust , effective results .\n( a ) Position of suggestions ( b ) Zoo med suggestions Figure 1 .\nQuery suggestion presentation in QuerySuggestion .\n3 .\nSTUDY\nTo examine the usefulness of destinations , we conducted a user study investigating the perceptions and performance of 36 subjects on four Web search systems , two with destination suggestions .\n3.1 Systems\nFour systems were used in this study : a baseline Web search system with no explicit support for query refinement ( Baseline ) , a search system with a query suggestion method that recommends additional queries ( QuerySuggestion ) , and two systems that augment baseline Web search with destination suggestions using either end-points of query trails ( QueryDestination ) , or end-points of session trails ( SessionDestination ) .\n3.1.1 System 1 : Baseline\nTo establish baseline performance against which other systems can be compared , we developed a masked interface to a popular search engine without additional support in formulating queries .\nThis system presented the user-constructed query to the search engine and returned ten top-ranking documents retrieved by the engine .\nTo remove potential bias that may have been caused by subjects ' prior perceptions , we removed all identifying information such as search engine logos and distinguishing interface features .\n3.1.2 System 2 : QuerySuggestion\nIn addition to the basic search functionality offered by Baseline , QuerySuggestion provides suggestions about further query refinements that searchers can make following an initial query submission .\nThese suggestions are computed using the search engine query log over the timeframe used for trail generation .\nFor each target query , we retrieve two sets of candidate suggestions that contain the target query as a substring .\nOne set is composed of 100 most frequent such queries , while the second set contains 100 most frequent queries that followed the target query in query logs .\nEach candidate query is then scored by multiplying its smoothed overall frequency by its smoothed frequency of following the target query in past search sessions , using Laplacian smoothing .\nBased on these scores , six top-ranked query suggestions are returned .\nIf fewer than six suggestions are found , iterative backoff is performed using progressively longer suffixes of the target query ; a similar strategy is described in [ 10 ] .\nSuggestions were offered in a box positioned on the top-right of the result page , adjacent to the search results .\nFigure 1a shows the position of the suggestions on the page .\nFigure 1b shows a zoomed view of the portion of the results page containing the suggestions offered for the query [ hubble telescope ] .\nTo the left of each query suggestion is an icon similar to a progress bar that encodes its normalized popularity .\nClicking a suggestion retrieves new search results for that query .\n3.1.3 System 3 : QueryDestination\nQueryDestination uses an interface similar to QuerySuggestion .\nHowever , instead of showing query refinements 
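The candidate scoring of Section 3.1.2 is described only in words; the sketch below is one way to realize it. The data structures (a Counter of overall query frequencies and a per-query mapping of follow-on frequencies), the smoothing constant, and the backoff direction are assumptions; the paper specifies only the two pools of 100 candidates, the product of Laplace-smoothed frequencies, and the cut-off at six suggestions.

```python
from collections import Counter

def suggest(target, overall_freq, follow_freq, k=6, alpha=1.0):
    """Score candidate refinements of `target` by the product of their smoothed overall
    frequency and their smoothed frequency of following `target` in past sessions;
    return the top k, backing off over suffixes of `target` if too few are found."""
    words = target.split()
    for start in range(len(words)):
        key = " ".join(words[start:])        # one reading of the paper's suffix backoff
        followers = Counter(follow_freq.get(key, {}))
        # pool 1: up to 100 most frequent queries containing `key` as a substring
        containing = [q for q, _ in overall_freq.most_common() if key in q and q != key][:100]
        # pool 2: up to 100 most frequent queries submitted immediately after `key`
        candidates = set(containing) | {q for q, _ in followers.most_common(100)}
        scored = sorted(candidates,
                        key=lambda q: (overall_freq[q] + alpha) * (followers[q] + alpha),
                        reverse=True)
        if len(scored) >= k or start == len(words) - 1:
            return scored[:k]
    return []
```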
for the submitted query , QueryDestination suggests up to six destinations frequently visited by other users who submitted queries similar to the current one , and computed as described in the previous section .3 Figure 2a shows the position of the destination suggestions on search results page .\nFigure 2b shows a zoomed view of the portion of the results page destinations suggested for the query [ hubb le telescope ] .\n( a ) Position of destinations ( b ) Zoo med destinations Figure 2 .\nDestination presentation in QueryDestination .\nTo keep the interface uncluttered , the page title of each destination is shown on hover over the page URL ( shown in Figure 2b ) .\nNext to the destination name , there is a clickable icon that allows the user to execute a search for the current query within the destination domain displayed .\nWe show destinations as a separate list , rather than increasing their search result rank , since they may topically deviate from the original query ( e.g. , those focusing on related topics or not containing the original query terms ) .\n3.1.4 System 4 : SessionDestination\nThe interface functionality in SessionDestina tion is analogous to QueryDestination .\nThe only difference between the two systems is the definition of trail end-points for queries used in computing top destinations .\nQueryDestination directs users to the domains others end up at for the active or similar queries .\nIn contrast , SessionDestination directs users to the domains other users visit at the end of the search session that follows the active or similar queries .\nThis downgrades the effect of multiple query iterations ( i.e. , we only care where users end up after submitting all queries ) , rather than directing searchers to potentially irrelevant domains that may precede a query reformulation .\n3.2 Research Questions\nWe were interested in determining the value of popular destinations .\nTo do this we attempt to answer the following research questions : 3 To improve reliability , in a similar way to QuerySuggestion , destinations are only shown if their popularity exceeds a frequency threshold .\nRQ1 : Are popular destinations preferable and more effective than query refinement suggestions and unaided Web search for : a. Searches that are well-defined ( `` known-item '' tasks ) ?\nb. 
Searches that are ill-defined ( `` exploratory '' tasks ) ?\nRQ2 : Should popular destinations be taken from the end of query trails or the end of session trails ?\n3.3 Subjects\n36 subjects ( 26 males and 10 females ) participated in our study .\nThey were recruited through an email announcement within our organization where they hold a range of positions in different divisions .\nThe average age of subjects was 34.9 years ( max = 62 , min = 27 , SD = 6.2 ) .\nAll are familiar with Web search , and conduct 7.5 searches per day on average ( SD = 4.1 ) .\nThirty-one subjects ( 86.1 % ) reported general awareness of the query refinements offered by commercial Web search engines .\n3.4 Tasks\nSince the search task may influence information-seeking behavior [ 4 ] , we made task type an independent variable in the study .\nWe constructed six known-item tasks and six open-ended , exploratory tasks that were rotated between systems and subjects as described in the next section .\nFigure 3 shows examples of the two task types .\nYou are considering purchasing a Voice Over Internet Protocol ( VoIP ) telephone .\nYou want to learn more about VoIP technology and providers that offer the service , and select the provider and telephone that best suits you .\nFigure 3 .\nExamples of known-item and exploratory tasks .\nExploratory tasks were phrased as simulated work task situations [ 5 ] , i.e. , short search scenarios that were designed to reflect real-life information needs .\nThese tasks generally required subjects to gather background information on a topic or gather sufficient information to make an informed decision .\nThe known-item search tasks required search for particular items of information ( e.g. , activities , discoveries , names ) for which the target was welldefined .\nA similar task classification has been used successfully in previous work [ 21 ] .\nTasks were taken and adapted from the Text Retrieval Conference ( TREC ) Interactive Track [ 7 ] , and questions posed on question-answering communities ( Yahoo! Answers , Google Answers , and Windows Live QnA ) .\nTo motivate the subjects during their searches , we allowed them to select two known-item and two exploratory tasks at the beginning of the experiment from the six possibilities for each category , before seeing any of the systems or having the study described to them .\nPrior to the experiment all tasks were pilot tested with a small number of different subjects to help ensure that they were comparable in difficulty and `` selectability '' ( i.e. 
, the likelihood that a task would be chosen given the alternatives ) .\nPost-hoc analysis of the distribution of tasks selected by subjects during the full study showed no preference for any task in either category .\n3.5 Design and Methodology\nThe study used a within-subjects experimental design .\nSystem had four levels ( corresponding to the four experimental systems ) and search tasks had two levels ( corresponding to the two task types ) .\nSystem and task-type order were counterbalanced according to a Graeco-Latin square design .\nSubjects were tested independently and each experimental session lasted for up to one hour .\nWe adhered to the following procedure :\n1 .\nUpon arrival , subjects were asked to select two known-item and two exploratory tasks from the six tasks of each type .\n2 .\nSubjects were given an overview of the study in written form that was read aloud to them by the experimenter .\n3 .\nSubjects completed a demographic questionnaire focusing on aspects of search experience .\n4 .\nFor each of the four interface conditions : a. Subjects were given an explanation of interface functionality lasting around 2 minutes .\nb. Subjects were instructed to attempt the task on the assigned system searching the Web , and were allotted up to 10 minutes to do so .\nc. Upon completion of the task , subjects were asked to complete a post-search questionnaire .\n5 .\nAfter completing the tasks on the four systems , subjects answered a final questionnaire comparing their experiences on the systems .\n6 .\nSubjects were thanked and compensated .\nIn the next section we present the findings of this study .\n4 .\nFINDINGS\nIn this section we use the data derived from the experiment to address our hypotheses about query suggestions and destinations , providing information on the effect of task type and topic familiarity where appropriate .\nParametric statistical testing is used in this analysis and the level of significance is set to \u074c < 0.05 , unless otherwise stated .\nAll Likert scales and semantic differentials used a 5-point scale where a rating closer to one signifies more agreement with the attitude statement .\n4.1 Subject Perceptions\nIn this section we present findings on how subjects perceived the systems that they used .\nResponses to post-search ( per-system ) and final questionnaires are used as the basis for our analysis .\n4.1.1 Search Process\nTo address the first research question wanted insight into subjects ' perceptions of the search experience on each of the four systems .\nIn the post-search questionnaires , we asked subjects to complete four 5-point semantic differentials indicating their responses to the attitude statement : `` The search we asked you to perform was '' .\nThe paired stimuli offered as responses were : `` relaxing '' / `` stressful '' , `` interesting '' / `` boring '' , `` restful '' / `` tiring '' , and `` easy '' / `` difficult '' .\nThe average obtained differential values are shown in Table 1 for each system and each task type .\nThe value corresponding to the differential `` All '' represents the mean of all three differentials , providing an overall measure of subjects ' feelings .\nTable 1 .\nPerceptions of search process ( lower = better ) .\n( QS ) , etc. 
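The Graeco-Latin square counterbalancing mentioned in Section 3.5 can be made concrete. The paper does not publish its actual square, so the construction below is only one possibility: it pairs two mutually orthogonal 4x4 Latin squares so that, within a block of four subjects, every system appears once in each position and once with each task slot; the task labels are invented for the example.

```python
# Illustrative counterbalancing sketch; system names come from Section 3.1,
# everything else (task labels, block structure) is assumed.
SYSTEMS = ["Baseline", "QuerySuggestion", "QueryDestination", "SessionDestination"]
TASKS = ["K1", "K2", "E1", "E2"]   # two known-item and two exploratory tasks

# two mutually orthogonal Latin squares of order 4 (0-indexed)
LATIN_A = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
LATIN_B = [[0, 1, 2, 3], [2, 3, 0, 1], [3, 2, 1, 0], [1, 0, 3, 2]]

def subject_schedule(row):
    """(system, task) sequence for the `row`-th subject within a block of four."""
    return [(SYSTEMS[LATIN_A[row][col]], TASKS[LATIN_B[row][col]]) for col in range(4)]

if __name__ == "__main__":
    for r in range(4):
        print(f"subject {r + 1}:", subject_schedule(r))
    # orthogonality check: all 16 (system, task) pairs occur exactly once per block
    assert len({p for r in range(4) for p in subject_schedule(r)}) == 16
```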
) .\nThe most positive response across all systems for each differential-task pair is shown in bold .\nWe applied two-way analysis of variance ( ANOVA ) to each differential across all four systems and two task types .\nSubjects found the search easier on QuerySuggestion and QueryDestination than the other systems for known-item tasks .4 For exploratory tasks , only searches conducted on QueryDestination were easier than on the other systems .5 Subjects indicated that exploratory tasks on the three non-baseline systems were more stressful ( i.e. , less `` relaxing '' ) than the knownitem tasks .6 As we will discuss in more detail in Section 4.1.3 , subjects regarded the familiarity of Baseline as a strength , and may have struggled to attempt a more complex task while learning a new interface feature such as query or destination suggestions .\n4.1.2 Interface Support\nWe solicited subjects ' opinions on the search support offered by QuerySuggestion , QueryDestination , and SessionDestination .\nThe following Likert scales and semantic differentials were used :\n\u2022 Likert scale A : `` Using this system enhances my effectiveness in finding relevant information . ''\n( Effectiveness ) 7 \u2022 Likert scale B : `` The queries/destinations suggested helped me get closer to my information goal . ''\n( CloseToGoal ) \u2022 Likert scale C : `` I would re-use the queries/destinations suggested if I encountered a similar task in the future '' ( Re-use ) \u2022 Semantic differential A : `` The queries/destinations suggested by the system were : `` relevant '' / `` irrelevant '' , `` useful '' / `` useless '' , `` appropriate '' / `` inappropriate '' .\nWe did not include these in the post-search questionnaire when subjects used the Baseline system as they refer to interface support options that Baseline did not offer .\nTable 2 presents the average responses for each of these scales and differentials , using the labels after each of the first three Likert scales in the bulleted list above .\nThe values for the three semantic differentials are included at the bottom of the table , as is their overall average under `` All '' .\nTable 2 .\nPerceptions of system support ( lower = better ) .\nThe results show that all three experimental systems improved subjects ' perceptions of their search effectiveness over Baseline , although only QueryDestination did so significantly .8 Further examination of the effect size ( measured using Cohen 's d ) revealed that QueryDestination affects search effectiveness most positively .9 QueryDestination also appears to get subjects closer to their information goal ( CloseToGoal ) than QuerySuggestion or\nSessionDestination , although only for exploratory search tasks .10 Additional comments on QuerySuggestion conveyed that subjects saw it as a convenience ( to save them typing a reformulation ) rather than a way to dramatically influence the outcome of their search .\nFor exploratory searches , users benefited more from being pointed to alternative information sources than from suggestions for iterative refinements of their queries .\nOur findings also show that our subjects felt that QueryDestination produced more `` relevant '' and `` useful '' suggestions for exploratory tasks than the other systems .11 All other observed differences between the systems were not statistically significant .12 The difference between performance of QueryDestination and SessionDestination is explained by the approach used to generate destinations ( described in Section 2 ) 
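The statistical machinery reported in this section, two-way ANOVAs over system and task type and Cohen's d for effect size, is standard. Since the questionnaire responses themselves are not published, the sketch below only shows how such an analysis is typically run; the column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def two_way_anova(df: pd.DataFrame):
    """Two-way ANOVA of a 5-point rating on system (4 levels) x task type (2 levels);
    `df` is assumed to have columns `rating`, `system`, and `task_type`."""
    model = smf.ols("rating ~ C(system) * C(task_type)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation, as used for the effect-size
    comparison of QueryDestination against the other systems."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled
```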
.\nSessionDestination 's recommendations came from the end of users ' session trails that often transcend multiple queries .\nThis increases the likelihood that topic shifts adversely affect their relevance .\n4.1.3 System Ranking\nIn the final questionnaire that followed completion of all tasks on all systems , subjects were asked to rank the four systems in descending order based on their preferences .\nTable 3 presents the mean average rank assigned to each of the systems .\nTable 3 .\nRelative ranking of systems ( lower = better ) .\nThese results indicate that subjects preferred QuerySuggestion and QueryDestination overall .\nHowever , none of the differences between systems ' ratings are significant .13 One possible explanation for these systems being rated higher could be that although the popular destination systems performed well for exploratory searches while QuerySuggestion performed well for known-item searches , an overall ranking merges these two performances .\nThis relative ranking reflects subjects ' overall perceptions , but does not separate them for each task category .\nOver all tasks there appeared to be a slight preference for QueryDestination , but as other results show , the effect of task type on subjects ' perceptions is significant .\nThe final questionnaire also included open-ended questions that asked subjects to explain their system ranking , and describe what they liked and disliked about each system : Baseline : Subjects who preferred Baseline commented on the familiarity of the system ( e.g. , `` was familiar and I did n't end up using suggestions '' ( S36 ) ) .\nThose who did not prefer this system disliked the lack of support for query formulation ( `` Can be difficult if you do n't pick good search terms '' ( S20 ) ) and difficulty locating relevant documents ( e.g. , `` Difficult to find what I was looking for '' ( S13 ) ; `` Clunky current technology '' ( S30 ) ) .\nQuerySuggestion : Subjects who rated QuerySuggestion highest commented on rapid support for query formulation ( e.g. , `` was useful in ( 1 ) saving typing ( 2 ) coming up with new ideas for query expansion '' ( S12 ) ; `` helps me better phrase the search term '' ( S24 ) ; `` made my next query easier '' ( S21 ) ) .\nThose who did not prefer this system criticized suggestion quality ( e.g. , `` Not relevant '' ( S11 ) ; `` Popular\nqueries were n't what I was looking for '' ( S18 ) ) and the quality of results they led to ( e.g. , `` Results ( after clicking on suggestions ) were of low quality '' ( S35 ) ; `` Ultimately unhelpful '' ( S1 ) ) .\nQueryDestination : Subjects who preferred this system commented mainly on support for accessing new information sources ( e.g. 
, `` provided potentially helpful and new areas / domains to look at '' ( S27 ) ) and bypassing the need to browse to these pages ( `` Useful to try to ` cut to the chase ' and go where others may have found answers to the topic '' ( S3 ) ) .\nThose who did not prefer this system commented on the lack of specificity in the suggested domains ( `` Should just link to site-specific query , not site itself '' ( S16 ) ; `` Sites were not very specific '' ( S24 ) ; `` Too general/vague '' ( S28 ) 14 ) , and the quality of the suggestions ( `` Not relevant '' ( S11 ) ; `` Irrelevant '' ( S6 ) ) .\nSessionDestination : Subjects who preferred this system commented on the utility of the suggested domains ( `` suggestions make an awful lot of sense in providing search assistance , and seemed to help very nicely '' ( S5 ) ) .\nHowever , more subjects commented on the irrelevance of the suggestions ( e.g. , `` did not seem reliable , not much help '' ( S30 ) ; `` Irrelevant , not my style '' ( S21 ) , and the related need to include explanations about why the suggestions were offered ( e.g. , `` Low-quality results , not enough information presented '' ( S35 ) ) .\nThese comments demonstrate a diverse range of perspectives on different aspects of the experimental systems .\nWork is obviously needed in improving the quality of the suggestions in all systems , but subjects seemed to distinguish the settings when each of these systems may be useful .\nEven though all systems can at times offer irrelevant suggestions , subjects appeared to prefer having them rather than not ( e.g. , one subject remarked `` suggestions were helpful in some cases and harmless in all '' ( S15 ) ) .\n4.1.4 Summary\nThe findings obtained from our study on subjects ' perceptions of the four systems indicate that subjects tend to prefer QueryDestination for the exploratory tasks and QuerySuggestion for the known-item searches .\nSuggestions to incrementally refine the current query may be preferred by searchers on known-item tasks when they may have just missed their information target .\nHowever , when the task is more demanding , searchers appreciate suggestions that have the potential to dramatically influence the direction of a search or greatly improve topic coverage .\n4.2 Search Tasks\nTo gain a better understanding of how subjects performed during the study , we analyze data captured on their perceptions of task completeness and the time that it took them to complete each task .\n4.2.1 Subject Perceptions\nIn the post-search questionnaire , subjects were asked to indicate on a 5-point Likert scale the extent to which they agreed with the following attitude statement : `` I believe I have succeeded in my performance of this task '' ( Success ) .\nIn addition , they were asked to complete three 5-point semantic differentials indicating their response to the attitude statement : `` The task we asked you to perform was : '' The paired stimuli offered as possible responses were `` clear '' / `` unclear '' , `` simple '' / `` complex '' , and `` familiar '' / `` unfamiliar '' .\nTable 4 presents the mean average response to these statements for each system and task type .\nTable 4 .\nPerceptions of task and task success ( lower = better ) .\nSubject responses demonstrate that users felt that their searches had been more successful using QueryDestination for exploratory tasks than with the other three systems ( i.e. 
, there was a two-way interaction between these two variables ) .15 In addition , subjects perceived a significantly greater sense of completion with knownitem tasks than with exploratory tasks .16 Subjects also found known-item tasks to be more `` simple '' , `` clear '' , and `` familiar '' .\n17 These responses confirm differences in the nature of the tasks we had envisaged when planning the study .\nAs illustrated by the examples in Figure 3 , the known-item tasks required subjects to retrieve a finite set of answers ( e.g. , `` find three interesting things to do during a weekend visit to Kyoto , Japan '' ) .\nIn contrast , the exploratory tasks were multi-faceted , and required subjects to find out more about a topic or to find sufficient information to make a decision .\nThe end-point in such tasks was less well-defined and may have affected subjects ' perceptions of when they had completed the task .\nGiven that there was no difference in the tasks attempted on each system , theoretically the perception of the tasks ' simplicity , clarity , and familiarity should have been the same for all systems .\nHowever , we observe a clear interaction effect between the system and subjects ' perception of the actual tasks .\n4.2.2 Task Completion Time\nIn addition to asking subjects to indicate the extent to which they felt the task was completed , we also monitored the time that it took them to indicate to the experimenter that they had finished .\nThe elapsed time from when the subject began issuing their first query until when they indicated that they were done was monitored using a stopwatch and recorded for later analysis .\nA stopwatch rather than system logging was used for this since we wanted to record the time regardless of system interactions .\nFigure 4 shows the average task completion time for each system and each task type .\nFigure 4 .\nMean average task completion time ( \u00b1 SEM ) .\nAs can be seen in the figure above , the task completion times for the known-item tasks differ greatly between systems .18 Subjects attempting these tasks on QueryDestination and QuerySuggestion complete them in less time than subjects on Baseline and SessionDestination .19 As discussed in the previous section , subjects were more familiar with the known-item tasks , and felt they were simpler and clearer .\nBaseline may have taken longer than the other systems since users had no additional support and had to formulate their own queries .\nSubjects generally felt that the recommendations offered by SessionDestination were of low relevance and usefulness .\nConsequently , the completion time increased slightly between these two systems perhaps as the subjects assessed the value of the proposed suggestions , but reaped little benefit from them .\nThe task completion times for the exploratory tasks were approximately equal on all four systems20 , although the time on Baseline was slightly higher .\nSince these tasks had no clearly defined termination criteria ( i.e. 
, the subject decided when they had gathered sufficient information ) , subjects generally spent longer searching , and consulted a broader range of information sources than in the known-item tasks .\n4.2.3 Summary\nAnalysis of subjects ' perception of the search tasks and aspects of task completion shows that the QuerySuggestion system made subjects feel more successful ( and the task more `` simple '' , `` clear '' , and `` familiar '' ) for the known-item tasks .\nOn the other hand , QueryDestination was shown to lead to heightened perceptions of search success and task ease , clarity , and familiarity for the exploratory tasks .\nTask completion times on both systems were significantly lower than on the other systems for known-item tasks .\n4.3 Subject Interaction\nWe now focus our analysis on the observed interactions between searchers and systems .\nAs well as eliciting feedback on each system from our subjects , we also recorded several aspects of their interaction with each system in log files .\nIn this section , we analyze three interaction aspects : query iterations , search-result clicks , and subject engagement with the additional interface features offered by the three non-baseline systems .\n4.3.1 Queries and Result Clicks\nSearchers typically interact with search systems by submitting queries and clicking on search results .\nAlthough our system offers additional interface affordances , we begin this section by analyzing querying and clickthrough behavior of our subjects to better understand how they conducted core search activities .\nTable 5 shows the average number of query iterations and search results clicked for each system-task pair .\nThe average value in each cell is computed for 18 subjects on each task type and system .\nTable 5 .\nAverage query iterations and result clicks ( per task ) .\ndiscussed in the previous section , subjects using this system felt more successful in their searches yet they exhibited less of the traditional query and result-click interactions required for search success on traditional search systems .\nIt may be the case that subjects ' queries on this system were more effective , but it is more likely that they interacted less with the system through these means and elected to use the popular destinations instead .\nOverall , subjects submitted most queries in QuerySuggestion , which is not surprising as this system actively encourages searchers to iteratively re-submit refined queries .\nSubjects interacted similarly with Baseline and SessionDestination systems , perhaps due to the low quality of the popular destinations in the latter .\nTo investigate this and related issues , we will next analyze usage of the suggestions on the three non-baseline systems .\n4.3.2 Suggestion Usage\nTo determine whether subjects found additional features useful , we measure the extent to which they were used when they were provided .\nSuggestion usage is defined as the proportion of submitted queries for which suggestions were offered and at least one suggestion was clicked .\nTable 6 shows the average usage for each system and task category .\nTable 6 .\nSuggestion uptake ( values are percentages ) .\nResults indicate that QuerySuggestion was used more for knownitem tasks than SessionDestination22 , and QueryDestination was used more than all other systems for the exploratory tasks .23 For well-specified targets in known-item search , subjects appeared to use query refinement most heavily .\nIn contrast , when subjects were exploring , they seemed to 
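The uptake measure defined in Section 4.3.2 reduces to a short computation over the interaction logs. The per-query fields below are assumptions, and the literal reading of the definition (all submitted queries in the denominator) is used; normalizing only over queries for which suggestions were shown is an alternative reading.

```python
def suggestion_uptake(query_log):
    """Proportion of submitted queries for which suggestions were offered and at
    least one suggestion was clicked (Section 4.3.2)."""
    if not query_log:
        return 0.0
    used = sum(1 for q in query_log
               if q.get("suggestions_offered") and q.get("suggestion_clicked"))
    return used / len(query_log)
```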
benefit most from the recommendation of additional information sources .\nSubjects selected almost twice as many destinations per query when using QueryDestination compared to SessionDestination .24 As discussed earlier , this may be explained by the lower perceived relevance and usefulness of destinations recommended by SessionDestination .\n4.3.3 Summary\nAnalysis of log interaction data gathered during the study indicates that although subjects submitted fewer queries and clicked fewer search results on QueryDestination , their engagement with suggestions was highest on this system , particularly for exploratory search tasks .\nThe refined queries proposed by QuerySuggestion were used the most for the known-item tasks .\nThere appears to be a clear division between the systems : QuerySuggestion was preferred for known-item tasks , while QueryDestination provided most-used support for exploratory tasks .\n6 .\nCONCLUSIONS\nWe presented a novel approach for enhancing users ' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs .\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search .\nResults of our study revealed that : ( i ) systems suggesting query refinements were preferred for known-item tasks , ( ii ) systems offering popular destinations were preferred for exploratory search tasks , and ( iii ) destinations should be mined from the end of query trails , not session trails .\nOverall , popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches by offering a new way to resolve information problems , and enhance the informationseeking experience for many Web searchers ."} {"id": "J-11", "title": "", "abstract": "", "keyphrases": ["algorithm game theori", "market", "trade network", "buyer and seller interact", "initi endow of monei", "bid price", "perfect competit", "benefit", "maximum and minimum amount", "econom and financ", "strateg behavior of trader", "complementari slack", "monopoli", "trade network"], "prmu": [], "lvl-1": "Trading Networks with Price-Setting Agents Larry Blume Dept. of Economics Cornell University, Ithaca NY lb19@cs.cornell.edu David Easley Dept. of Economics Cornell University, Ithaca NY dae3@cs.cornell.edu Jon Kleinberg Dept. of Computer Science Cornell University, Ithaca NY kleinber@cs.cornell.edu \u00b4Eva Tardos Dept. of Computer Science Cornell University, Ithaca NY eva@cs.cornell.edu ABSTRACT In a wide range of markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations.\nTypically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market.\nWe model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders.\nIn this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered.\nWe show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. 
socially optimal) allocation of goods.\nWe extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers.\nFinally, we consider how the profits obtained by the traders depend on the underlying graph - roughly, a trader can command a positive profit if and only if it has an essential connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders.\nOur work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1.\nINTRODUCTION In a range of settings where markets mediate the interactions of buyers and sellers, one observes several recurring properties: Individual buyers and sellers often trade through intermediaries, not all buyers and sellers have access to the same intermediaries, and not all buyers and sellers trade at the same price.\nOne example of this setting is the trade of agricultural goods in developing countries.\nGiven inadequate transportation networks, and poor farmers'' limited access to capital, many farmers have no alternative to trading with middlemen in inefficient local markets.\nA developing country may have many such partially overlapping markets existing alongside modern efficient markets [2].\nFinancial markets provide a different example of a setting with these general characteristics.\nIn these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems.\nFor many assets there is no one market; trade in a single asset may occur simultaneously on the floor of an exchange, on crossing networks, on electronic exchanges, and in markets in other countries.\nSome buyers and sellers have access to many or all of these trading venues; others have access to only one or a few of them.\nThe price at which the asset trades may differ across these trading venues.\nIn fact, there is no price as different traders pay or receive different prices.\nIn many settings there is also a gap between the price a buyer pays for an asset, the ask price, and the price a seller receives for the asset, the bid price.\nOne of the most striking examples of this phenomenon occurs in the market for foreign exchange, where there is an interbank market with restricted access and a retail market with much more open access.\nSpreads, defined as the difference between bid and ask prices, differ significantly across these markets, even though the same asset is being traded in the two markets.\nIn this paper, we develop a framework in which such phenomena emerge from a game-theoretic model of trade, with buyers, sellers, and traders interacting on a network.\nThe edges of the network connect traders to buyers and sellers, and thus represent the access that different market participants have to one another.\nThe traders serve as intermediaries in a two-stage trading game: they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to; the sellers and buyers then react to the prices they face.\nThus, the network encodes the relative power in the structural positions of the market participants, including the 
implicit levels of competition among traders.\nWe show that this game always has a 143 subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods.\nWe also analyze how trader profits depend on the network structure, essentially characterizing in graph-theoretic terms how a trader``s payoff is determined by the amount of competition it experiences with other traders.\nOur work here is connected to several lines of research in economics, finance, and algorithmic game theory, and we discuss these connections in more detail later in the introduction.\nAt a general level, our approach can be viewed as synthesizing two important strands of work: one that treats buyer-seller interaction using network structures, but without attempting to model the processses by which prices are actually formed [1, 4, 5, 6, 8, 9, 10, 13]; and another strand in the literature on market microstructure that incorporates price-setting intermediaries, but without network-type constraints on who can trade with whom [12].\nBy developing a network model that explicitly includes traders as price-setting agents, in a system together with buyers and sellers, we are able to capture price formation in a network setting as a strategic process carried out by intermediaries, rather than as the result of a centrally controlled or exogenous mechanism.\nThe Basic Model: Indistinguishable Goods.\nOur goal in formulating the model is to express the process of price-setting in markets such as those discussed above, where the participants do not all have uniform access to one another.\nWe are given a set B of buyers, a set S of sellers, and a set T of traders.\nThere is an undirected graph G that indicates who is able to trade with whom.\nAll edges have one end in B \u222a S and the other in T; that is, each edge has the form (i, t) for i \u2208 S and t \u2208 T, or (j, t) for j \u2208 B and t \u2208 T.\nThis reflects the constraints that all buyer-seller transactions go through traders as intermediaries.\nIn the most basic version of the model, we consider identical goods, one copy of which is initially held by each seller.\nBuyers and sellers each have a value for one copy of the good, and we assume that these values are common knowledge.\nWe will subsequently generalize this to a setting in which goods are distinguishable, buyers can value different goods differently, and potentially sellers can value transactions with different buyers differently as well.\nHaving different buyer valuations captures settings like house purchases; adding different seller valuations as well captures matching markets - for example, sellers as job applicants and buyers as employers, with both caring about who ends up with which good (and with traders acting as services that broker the job search).\nThus, to start with the basic model, there is a single type of good; the good comes in individisible units; and each seller initially holds one unit of the good.\nAll three types of agents value money at the same rate; and each i \u2208 B \u222a S additionally values one copy of the good at \u03b8i units of money.\nNo agent wants more than one copy of the good, so additional copies are valued at 0.\nEach agent has an initial endowment of money that is larger than any individual valuation \u03b8i; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position, not a lack of funds.\nWe picture each 
We picture each good that is sold flowing along a sequence of two edges: from a seller to a trader, and then from the trader to a buyer. The particular way in which goods flow is determined by the following game. First, each trader offers a bid price to each seller it is connected to, and an ask price to each buyer it is connected to. Sellers and buyers then choose from among the offers presented to them by traders. If multiple traders propose the same price to a seller or buyer, then there is no strict best response for the seller or buyer. In this case a selection must be made, and, as is standard (see for example [10]), we (the modelers) choose among the best offers. Finally, each trader buys a copy of the good from each seller that accepts its offer, and it sells a copy of the good to each buyer that accepts its offer. If a particular trader t finds that more buyers than sellers accept its offers, then it has committed to provide more copies of the good than it has received, and we will say that this results in a large penalty to the trader for defaulting; the effect of this is that in equilibrium, no trader will choose bid and ask prices that result in a default.

More precisely, a strategy for each trader t is a specification of a bid price β_ti for each seller i to which t is connected, and an ask price α_tj for each buyer j to which t is connected. (We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers.) Each seller or buyer then chooses at most one incident edge, indicating the trader with whom they will transact, at the indicated price. (The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.) The payoffs are as follows. For each seller i, the payoff from selecting trader t is β_ti, while the payoff from selecting no trader is θ_i. (In the former case, the seller receives β_ti units of money, while in the latter it keeps its copy of the good, which it values at θ_i.) For each buyer j, the payoff from selecting trader t is θ_j − α_tj, while the payoff from selecting no trader is 0. (In the former case, the buyer receives the good but gives up α_tj units of money.) For each trader t, with accepted offers from sellers i_1, ..., i_s and buyers j_1, ..., j_b, the payoff is ∑_r α_{t j_r} − ∑_r β_{t i_r}, minus a penalty π if b > s. The penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty. This defines the basic elements of the game. The equilibrium concept we use is subgame perfect Nash equilibrium.

Some Examples. To help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1. To keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, ... from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, ... from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, ... from top to bottom.
All sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.

In Figure 1(a), we show how a standard second-price auction arises naturally from our model. Suppose the buyer valuations from top to bottom are w > x > y > z. The bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w − x by selling the good to buyer j1 for w. In this way, we can consider this particular instance as an auction for a single good in which the traders act as proxies for their adjacent buyers. The buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader. Note that one can construct a k-unit auction with f > k buyers just as easily, by building a complete bipartite graph on k sellers and f traders, and then attaching each trader to a single distinct buyer.

In Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same numerically.

Figure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it. (b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network. (c) A form of implicit perfect competition: all bid/ask spreads will be zero in equilibrium, even though no trader directly competes with any other trader for the same buyer-seller pair.

Specifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option. And indeed, in every equilibrium, there is a real number x ∈ [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers. Thus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly. Specifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business. On the other hand, the other sellers and buyers experience the downsides of monopoly - they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit. Note further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents - capturing the fact that there is no one fixed price in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.

The previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely. (In that example, two traders each had the ability to move a copy of the good from i2 to j2.)
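To make the payoff rules of the game concrete, the following minimal sketch encodes the Figure 1(b) equilibrium just discussed and evaluates each agent's payoff. The Python encoding, the concrete value of x, and the tie-breaking choice (routing both i2 and j2 through t1) are illustrative assumptions, not part of the model's specification.

```python
# Sketch: payoffs in the Figure 1(b) equilibrium (all buyer valuations 1, sellers 0).
x = 0.5  # any x in [0, 1] gives an equilibrium of this form

theta = {"i1": 0, "i2": 0, "i3": 0, "j1": 1, "j2": 1, "j3": 1}
bids = {("t1", "i1"): 0, ("t1", "i2"): x, ("t2", "i2"): x, ("t2", "i3"): 0}  # beta_ti
asks = {("t1", "j1"): 1, ("t1", "j2"): x, ("t2", "j2"): x, ("t2", "j3"): 1}  # alpha_tj

# Each seller accepts a highest bid and each buyer a lowest ask (here every agent weakly
# prefers trading); ties are broken by dictionary order, which routes i2 and j2 through t1.
sellers, buyers, traders = ["i1", "i2", "i3"], ["j1", "j2", "j3"], ["t1", "t2"]
sell_to = {i: max((t for (t, s) in bids if s == i), key=lambda t: bids[(t, i)]) for i in sellers}
buy_from = {j: min((t for (t, b) in asks if b == j), key=lambda t: asks[(t, j)]) for j in buyers}

seller_payoff = {i: bids[(sell_to[i], i)] for i in sellers}              # beta_ti
buyer_payoff = {j: theta[j] - asks[(buy_from[j], j)] for j in buyers}    # theta_j - alpha_tj
trader_payoff = {
    t: sum(a for (tt, j), a in asks.items() if tt == t and buy_from[j] == t)
     - sum(b for (tt, i), b in bids.items() if tt == t and sell_to[i] == t)
    for t in traders
}
print(seller_payoff, buyer_payoff, trader_payoff)
```

On this instance the middle seller and buyer split the value x and 1 − x, while each trader earns exactly 1 from the seller and buyer it monopolizes, matching the discussion above.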
But as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons. The example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y ∈ [0, 1] such that every bid and every ask price is equal to y. In other words, all traders make zero profit, whether or not a copy of the good passes through them - and yet, no two traders have any seller-buyer paths in common. The price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.

Extending the Model to Distinguishable Goods. We extend the basic model to a setting with distinguishable goods, as follows. Instead of having each agent i ∈ B ∪ S hold a single numerical valuation θ_i, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of θ_ji, and if seller i sells its good to buyer j, it experiences a loss of utility of θ_ij. This generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices. A strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller. (We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a menu with a price attached to each buyer (resp. seller).) Each buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before. This general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures). Here the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market. Of course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to - this (roughly) captures settings like housing markets.

Our Results. Our results will identify general forms of some of the principles noted in the examples discussed above - including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits. To make these precise, we introduce the following notation. Any outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (i_e, t_e, j_e), where i_e ∈ S, t_e ∈ T, and j_e ∈ B; moreover, each seller and each buyer appears in at most one triple. The meaning is that for each e ∈ M, the good initially held by i_e moves to j_e through t_e. (Sellers appearing in no triple keep their copy of the good.) We say that the value of the allocation is equal to ∑_{e∈M} (θ_{j_e i_e} − θ_{i_e j_e}). Let θ* denote the maximum value of any allocation M that is feasible given the network. We show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value θ*; in other words, it achieves the best value possible.
Thus, equilibria in this model are always efficient, in that the market enables the right set of people to get the good, subject to the network constraints. We establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices. By the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed - in particular, how much profit a trader is able to make based on its position in the network. We find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria. However, we are able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute this. In particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation θ*. We also obtain results for the sum of all trader profits.

Related Work. The standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model, in which anonymous buyers and sellers trade a good at a single market-clearing price. This reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights. But it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other. The difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other. In fact there is no market, in the everyday sense of that word, in the Walrasian model. That is, there is no physical or virtual place where buyers and sellers interact to trade and set prices. Thus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.

There are several literatures in economics and finance which examine how prices are set, rather than just determining equilibrium prices. The literature on imperfect competition is perhaps the oldest of these. Here a monopolist, or a group of oligopolists, chooses prices in order to maximize profits (see [14] for the standard textbook treatment of these markets). A monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates. Oligopolists play a game in which their payoffs depend on market demand and the actions of their competitors. In this literature there are agents who set prices, but the fiction of a single market is maintained. In the equilibrium search literature, firms set prices and consumers search over them (see [3]). Consumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries. In the general equilibrium literature there have been various attempts to introduce price determination. A standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand.
The Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market. More sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them. But again there are no price-setting agents here. In the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory). Work in information economics has identified similar phenomena (see e.g. [7]). But there is little research in these literatures examining the effect of restrictions on who can trade with whom.

There have been several approaches to studying how network structure determines prices. These have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms. In briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system. In recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium. Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al. and Chu and Shen additionally provide a budget-balanced mechanism.
Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition. In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism. Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient - something not possible in the mechanism design frameworks that have been used. Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties, but as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism. (Kranton and Minehart, however, can also analyze a more general setting in which buyers' values are private, so that buyers and sellers play a game of incomplete information; we deal only with complete information.) Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is different, as they do not consider agents that seek to set prices.

2. MARKETS WITH PAIR-TRADERS

For understanding the ideas behind the analysis of the general model, it is very useful to first consider a special case with a restricted form of traders that we refer to as pair-traders. In this case, each trader is connected to just one buyer and one seller. (Thus, it essentially serves as a trade route between the two.) The techniques we develop to handle this case will form a useful basis for reasoning about the case of traders that may be connected arbitrarily to the sellers and buyers. We will relate profits in a subgame perfect Nash equilibrium to optimal solutions of a certain linear program, use this relation to show that all equilibria result in efficient allocation of the goods, and show that a pure equilibrium always exists. First, we consider the simplest model, where sellers have indistinguishable items and each buyer is interested in getting one item. Then we extend the results to the more general case of a matching market, as discussed in the previous section, where valuations depend on the identity of the seller and buyer. We then characterize the minimum and maximum profits traders can make. In the next section, we extend the results to traders that may be connected to any subset of sellers and buyers.
Given that we are working with pair-traders in this section, we can represent the problem using a bipartite graph G whose node set is B ∪ S, and where each trader t, connecting seller i and buyer j, appears as an edge t = (i, j) in G. Note, however, that we allow multiple traders to connect the same pair of agents. For each buyer and seller i, we will use adj(i) to denote the set of traders who can trade with i.

2.1 Indistinguishable Goods

The socially optimal trade for the case of indistinguishable goods is the solution of the transportation problem: sending goods along the edges representing the traders. The edges along which trade occurs correspond to a matching in this bipartite graph, and the optimal trade is described by the following linear program.

    max  SV(x) = ∑_{t=(i,j)∈T} x_t (θ_j − θ_i)
    s.t. x_t ≥ 0                  for all t ∈ T
         ∑_{t∈adj(i)} x_t ≤ 1     for all i ∈ S
         ∑_{t∈adj(j)} x_t ≤ 1     for all j ∈ B

Next we consider an equilibrium. Each trader t = (i, j) must offer a bid β_t and an ask α_t. (We omit the subscript denoting the seller and buyer here since we are dealing with pair-traders.) Given the bid and ask prices, the agents react to these prices, as described earlier. Instead of focusing on prices, we will focus on profits. If a seller i sells to a trader t ∈ adj(i) with bid β_t, then his profit is p_i = β_t − θ_i. Similarly, if a buyer j buys from a trader t ∈ adj(j) with ask α_t, then his profit is p_j = θ_j − α_t. Finally, if a trader t trades with ask α_t and bid β_t, then his profit is y_t = α_t − β_t. All agents not involved in trade make 0 profit. We will show that the profits at equilibrium are an optimal solution to the following linear program.

    min  sum(p, y) = ∑_{i∈B∪S} p_i + ∑_{t∈T} y_t
    s.t. y_t ≥ 0                              for all t ∈ T
         p_i ≥ 0                              for all i ∈ S ∪ B
         y_t ≥ (θ_j − p_j) − (θ_i + p_i)      for all t = (i, j) ∈ T

LEMMA 2.1. At equilibrium the profits must satisfy the above inequalities.

Proof. Clearly all profits are nonnegative, as trading is optional for all agents. To see why the last set of inequalities holds, consider two cases separately. For a trader t who conducted trade, we get equality by definition. For other traders t = (i, j), the value p_i + θ_i is the price that seller i sold for (or θ_i if seller i decided to keep the good). Offering a bid β_t > p_i + θ_i would get the seller to sell to trader t. Similarly, θ_j − p_j is the price that buyer j bought for (or θ_j if he didn't buy), and for any ask α_t < θ_j − p_j, the buyer will buy from trader t. So unless θ_j − p_j ≤ θ_i + p_i, the trader has a profitable deviation.
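As a concrete illustration of how these two programs are used, the following sketch solves both of them for a small assumed instance with scipy.optimize.linprog; the instance, the agent labels, and the SciPy dependency are illustrative assumptions rather than anything prescribed by the paper.

```python
# Sketch: the primal (optimal trade) and dual (equilibrium profits) LPs for a small
# pair-trader instance.  The instance below is a made-up example.
import numpy as np
from scipy.optimize import linprog

theta = {"i1": 0.0, "i2": 0.0, "j1": 1.0, "j2": 1.0}     # valuations theta_i
traders = [("i1", "j1"), ("i2", "j1"), ("i2", "j2")]     # pair-traders t = (i, j)
agents = ["i1", "i2", "j1", "j2"]
n, m = len(agents), len(traders)

# Primal: max sum_t x_t (theta_j - theta_i), each seller/buyer in at most one trade.
c = np.array([-(theta[j] - theta[i]) for (i, j) in traders])     # linprog minimizes
A = np.zeros((n, m))
for k, (i, j) in enumerate(traders):
    A[agents.index(i), k] = A[agents.index(j), k] = 1.0
primal = linprog(c, A_ub=A, b_ub=np.ones(n), bounds=(0, 1), method="highs")
print("optimal trade x =", primal.x, " social value =", -primal.fun)

# Dual: min sum_i p_i + sum_t y_t  s.t.  p_i + p_j + y_t >= theta_j - theta_i.
A_d = np.zeros((m, n + m))
b_d = np.zeros(m)
for k, (i, j) in enumerate(traders):
    A_d[k, agents.index(i)] = A_d[k, agents.index(j)] = A_d[k, n + k] = -1.0
    b_d[k] = -(theta[j] - theta[i])
dual = linprog(np.ones(n + m), A_ub=A_d, b_ub=b_d, bounds=(0, None), method="highs")
print("agent profits p =", dual.x[:n], " trader profits y =", dual.x[n:])
```

By LP duality the two printed objective values coincide, which is exactly the relationship exploited in the proof that follows.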
Now we are ready to prove our first theorem.

THEOREM 2.2. In any equilibrium the trade is efficient.

Proof. Let x be a flow of goods resulting in an equilibrium, and let the variables p and y be the profits. Consider the linear program describing the socially optimal trade. We will also add a set of additional constraints x_t ≤ 1 for all traders t ∈ T; this can be added to the description, as it is implied by the other constraints. Now we claim that the two linear programs are duals of each other. The variables p_i for agents i ∈ B ∪ S correspond to the constraints ∑_{t∈adj(i)} x_t ≤ 1. The additional dual variable y_t corresponds to the additional inequality x_t ≤ 1. The optimality of the social value of the trade will follow from the claim that the solutions of these two linear programs derived from an equilibrium satisfy the complementary slackness conditions for this pair of linear programs, and hence both x and (p, y) are optimal solutions to the corresponding linear programs. There are three different complementary slackness conditions we need to consider, corresponding to the three sets of variables x, y and p. Any agent can only make profit if he transacts, so p_i > 0 implies ∑_{t∈adj(i)} x_t = 1, and similarly, y_t > 0 implies that x_t = 1 also. Finally, consider a trader t with x_t > 0 that trades between seller i and buyer j, and recall that we have seen above that the inequality y_t ≥ (θ_j − p_j) − (θ_i + p_i) is satisfied with equality for those who trade.

Next we argue that equilibria always exist.

THEOREM 2.3. For any efficient trade between buyers and sellers there is a pure equilibrium of bid-ask values that supports this trade.

Proof. Consider an efficient trade; let x_t = 1 if t trades and 0 otherwise; and consider an optimal solution (p, y) to the dual linear program. We would like to claim that all dual solutions correspond to equilibrium prices, but unfortunately this is not exactly true. Before we can convert a dual solution to equilibrium prices, we may need to modify the solution slightly as follows. Consider any agent i that is only connected to a single trader t. Because the agent is only connected to a single trader, the variables y_t and p_i are dual variables corresponding to the same primal inequality x_t ≤ 1, and they always appear together as y_t + p_i in all inequalities, and also in the objective function. Thus there is an optimal solution in which p_i = 0 for all agents i connected only to a single trader.

Assume (p, y) is a dual solution where agents connected only to one trader have p_i = 0. For a seller i, let β_t = θ_i + p_i be the bid for all traders t adjacent to i. Similarly, for each buyer j, let α_t = θ_j − p_j be the ask for all traders t adjacent to j. We claim that this set of bids and asks, together with the trade x, forms an equilibrium. To see why, note that all traders t adjacent to a seller or buyer i offer the same ask or bid, and so trading with any trader is equally good for agent i.
Also, if i is not trading in the solution x, then by complementary slackness p_i = 0, and hence not trading is also equally good for i. This shows that sellers and buyers don't have an incentive to deviate. We need to show that traders have no incentive to deviate either. When a trader t is trading with seller i and buyer j, then profitable deviations would involve increasing α_t or decreasing β_t. But by our construction (and the assumption about monopolized agents), all sellers and buyers have multiple identical ask/bid offers, or trade is occurring at valuation. In either case such a deviation cannot be successful. Finally, consider a trader t = (i, j) who doesn't trade. A deviation for t would involve offering a higher bid to seller i and a lower ask to buyer j than the prices at which they currently trade. However, y_t = 0 by complementary slackness, and hence p_i + θ_i ≥ θ_j − p_j, so i sells for a price at least as high as the price at which j buys, so trader t cannot create a profitable trade.

Note that a seller or buyer i connected to a single trader t cannot have profit at equilibrium, so possible equilibrium profits are in one-to-one correspondence with dual solutions for which p_i = 0 whenever i is monopolized by one trader. A disappointing feature of the equilibrium created by this proof is that some traders t may have to create bid-ask pairs where β_t > α_t, offering to buy for more than the price at which they are willing to sell. Traders that make such crossing bid-ask pairs never actually perform a trade, so this does not result in negative profit for the trader, but such pairs are unnatural. Crossing bid-ask pairs are weakly dominated by the strategy of offering a low bid β = 0 and an extremely high ask, to guarantee that neither is accepted. To formulate a way of avoiding such crossing pairs, we say an equilibrium is cross-free if α_t ≥ β_t for all traders t. We now show there is always a cross-free equilibrium.

THEOREM 2.4. For any efficient trade between buyers and sellers there is a pure cross-free equilibrium.

Proof. Consider an optimal solution to the dual linear program. To get an equilibrium without crossing bids, we need to make a more general modification than just assuming that p_i = 0 for all sellers and buyers connected to only a single trader. Let E be the set of edges t = (i, j) that are tight, in the sense that we have the equality y_t = (θ_j − p_j) − (θ_i + p_i). This set E contains all the edges where trade occurs, and some more edges. We want to make sure that p_i = 0 for all sellers and buyers that have degree at most 1 in E.
Consider a seller i that has p_i > 0. We must have i involved in a trade, and the edge t = (i, j) along which the trade occurs must be tight. Suppose this is the only tight edge adjacent to agent i; then we can decrease p_i and increase y_t until one of the following happens: either p_i = 0 or the constraint of some other trader t′ ∈ adj(i) becomes tight. This change only increases the set of tight edges E, keeps the solution feasible, and does not change the objective function value. So after doing this for all sellers, and analogously changing y_t and p_j for all buyers, we get an optimal solution where all sellers and buyers i either have p_i = 0 or have at least two adjacent tight edges.

Now we can set asks and bids to form a cross-free equilibrium. For all traders t = (i, j) associated with an edge t ∈ E, we set α_t and β_t as before: we set the bid β_t = p_i + θ_i and the ask α_t = θ_j − p_j. For a trader t = (i, j) ∉ E we have that p_i + θ_i > θ_j − p_j, and we set α_t = β_t to be any value in the range [θ_j − p_j, p_i + θ_i]. This guarantees that for each seller or buyer the best sell or buy offer is along the edge where trade occurs in the solution. The bid-ask values along the tight edges guarantee that traders who trade cannot increase their spread. Traders t = (i, j) who do not trade cannot make a profit, due to the constraint p_i + θ_i ≥ θ_j − p_j.

Figure 2: Left: an equilibrium with crossing bids where traders make no money. Right: an equilibrium without crossing bids, for any value x ∈ [0, 1]. Total trader profit ranges between 1 and 2.

2.2 Distinguishable Goods

We now consider the case of distinguishable goods. As in the previous section, we can write a transshipment linear program for the socially optimal trade, with the only change being in the objective function:

    max  SV(x) = ∑_{t=(i,j)∈T} x_t (θ_ji − θ_ij)

We can show that the dual of this linear program corresponds to trader profits. Recall that we needed to add the constraints x_t ≤ 1 for all traders. The dual is then:

    min  sum(p, y) = ∑_{i∈B∪S} p_i + ∑_{t∈T} y_t
    s.t. y_t ≥ 0                                for all t ∈ T
         p_i ≥ 0                                for all i ∈ S ∪ B
         y_t ≥ (θ_ji − p_j) − (θ_ij + p_i)      for all t = (i, j) ∈ T

It is not hard to extend the proofs of Theorems 2.2-2.4 to this case. Profits in an equilibrium satisfy the dual constraints, and profits and trade satisfy complementary slackness. This shows that trade is socially optimal. Taking an optimal dual solution where p_i = 0 for all agents that are monopolized, we can convert it to an equilibrium, and with a bit more care, we can also create an equilibrium with no crossing bid-ask pairs.

THEOREM 2.5. All equilibria for the case of pair-traders with distinguishable goods result in socially optimal trade. Pure non-crossing equilibria exist.

2.3 Trader Profits

We have seen that all equilibria are efficient. However, it turns out that equilibria may differ in how the value of the allocation is spread between the sellers, buyers and traders. Figure 2 depicts a simple example of this phenomenon. Our goal is to understand how a trader's profit is affected by its position in the network; we will use the characterization we obtained to work out the range of profits a trader can make. To maximize the profit of a trader t (or a subset of traders T′), all we need to do is to find an optimal solution to the dual linear program maximizing the value of y_t (or the sum ∑_{t∈T′} y_t). Such dual solutions will then correspond to equilibria with non-crossing prices.
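The following sketch makes this recipe concrete on a small assumed instance: it first solves the dual to find its optimal value, and then re-solves over the optimal face while maximizing one chosen trader's profit variable. The instance, the choice of trader, and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Sketch: maximum equilibrium profit of one pair-trader, computed as the largest value
# of its dual variable y_t over all optimal solutions of the dual LP (made-up instance).
import numpy as np
from scipy.optimize import linprog

theta = {"i1": 0.0, "i2": 0.0, "j1": 1.0, "j2": 1.0}
traders = [("i1", "j1"), ("i2", "j1"), ("i2", "j2")]
agents = ["i1", "i2", "j1", "j2"]
n, m = len(agents), len(traders)

# Dual feasibility: p_i + p_j + y_t >= theta_j - theta_i for every pair-trader t = (i, j).
A_ub = np.zeros((m, n + m))
b_ub = np.zeros(m)
for k, (i, j) in enumerate(traders):
    A_ub[k, agents.index(i)] = A_ub[k, agents.index(j)] = A_ub[k, n + k] = -1.0
    b_ub[k] = -(theta[j] - theta[i])

ones = np.ones(n + m)
opt = linprog(ones, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs").fun

target = 2                                   # trader t3 = ("i2", "j2")
c = np.zeros(n + m)
c[n + target] = -1.0                         # maximize y_t3 (linprog minimizes)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=ones.reshape(1, -1), b_eq=[opt],   # stay on the optimal face of the dual
              bounds=(0, None), method="highs")
print("maximum equilibrium profit of t3:", res.x[n + target])
```

On this instance the maximum equals the trader's marginal contribution to the social value, in line with the characterization in Theorem 2.7 below.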
THEOREM 2.6. For any trader t or subset of traders T′, the maximum total profit they can make in any equilibrium can be computed in polynomial time. This maximum profit can be obtained by a non-crossing equilibrium.

One way to think about the profit of a trader t = (i, j) is as a subtraction from the value of the corresponding edge (i, j). The value of the edge is the social value θ_ji − θ_ij if the trader makes no profit, and decreases to θ_ji − θ_ij − y_t if the trader t insists on making y_t profit. Trader t gets y_t profit in equilibrium if, after this decrease in the value of the edge, the edge is still included in the optimal transshipment.

THEOREM 2.7. A trader t can make profit in an equilibrium if and only if t is essential for the social welfare, that is, if deleting agent t decreases social welfare. The maximum profit he can make is exactly his value to society, that is, the increase his presence causes in the social welfare.

If we allow crossing equilibria, then we can also find the minimum possible profit. Recall that in the proof of Theorem 2.3, traders only made money off of sellers or buyers that they have a monopoly over. Allowing such equilibria with crossing bids, we can find the minimum profit a trader or set of traders can make by minimizing the value y_t (or the sum ∑_{t∈T′} y_t) over all optimal solutions that satisfy p_i = 0 whenever i is connected to only a single trader.

THEOREM 2.8. For any trader t or subset of traders T′, the minimum total profit they can make in any equilibrium can be computed in polynomial time.

3. GENERAL TRADERS

Next we extend the results to a model where traders may be connected to an arbitrary number of sellers and buyers. For a trader t ∈ T we will use S(t) and B(t) to denote the sets of sellers and buyers connected to trader t. In this section we focus on the general case where goods are distinguishable (i.e. both buyers and sellers have valuations that are sensitive to the identity of the agent they are paired with in the allocation). In the full version of the paper we also discuss the special case of indistinguishable goods in more detail.

To get the optimal trade, we consider the bipartite graph G = (S ∪ B, E) connecting sellers and buyers, where an edge e = (i, j) connects a seller i and a buyer j if there is a trader adjacent to both: E = {(i, j) : adj(i) ∩ adj(j) ≠ ∅}. On this graph, we then solve the instance of the assignment problem that was also used in Section 2.2, with the value of edge (i, j) equal to θ_ji − θ_ij (since the value of trading between i and j is independent of which trader conducts the trade). We will also use the dual of this linear program:

    min  val(z) = ∑_{i∈B∪S} z_i
    s.t. z_i ≥ 0                     for all i ∈ S ∪ B
         z_i + z_j ≥ θ_ji − θ_ij     for all i ∈ S, j ∈ B with adj(i) ∩ adj(j) ≠ ∅
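Before turning to the bidding model, here is a small sketch of the projection step just described: it builds the induced seller-buyer graph E from the trader adjacencies and computes the optimal allocation value θ* as a maximum-weight matching. The instance and the networkx dependency are illustrative assumptions.

```python
# Sketch: optimal allocation value theta* for general traders, via a maximum-weight
# matching on the induced seller-buyer graph (made-up instance; networkx assumed).
import networkx as nx

# trader -> (sellers it is connected to, buyers it is connected to)
links = {"t1": ({"i1", "i2"}, {"j1"}), "t2": ({"i2"}, {"j1", "j2"})}
theta_buy = {("j1", "i1"): 3, ("j1", "i2"): 2, ("j2", "i2"): 2}    # theta_ji
theta_sell = {("i1", "j1"): 0, ("i2", "j1"): 0, ("i2", "j2"): 1}   # theta_ij

G = nx.Graph()
for sellers, buyers in links.values():
    for i in sellers:
        for j in buyers:
            # edge value theta_ji - theta_ij, independent of which trader connects i and j
            G.add_edge(i, j, weight=theta_buy[(j, i)] - theta_sell[(i, j)])

matching = nx.max_weight_matching(G)        # pairs (i, j) realizing theta*
theta_star = sum(G[u][v]["weight"] for u, v in matching)
print("optimal pairs:", matching, " theta* =", theta_star)
```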
3.1 Bids and Asks and Trader Optimization

First we need to understand what bidding model we will use. Even when goods are indistinguishable, a trader may want to price-discriminate, and offer different bid and ask values to different sellers and buyers. In the case of distinguishable goods, we have to deal with a further complication: the trader has to name the good she is proposing to sell or buy, and can possibly offer multiple different products. There are two variants of our model, depending on whether a trader makes a single bid or ask to a seller or buyer, or offers a menu of options. (i) A trader t can offer a buyer j a menu of asks α_tji, a vector of values for all the products (sellers) she is connected to, where α_tji is the ask for the product of seller i. Symmetrically, a trader t can offer to each seller i a menu of bids β_tij for selling to different buyers j. (ii) Alternatively, we can require that each trader t makes at most one bid to each seller and one ask to each buyer, where an ask has to name the product being sold and a bid has to name a particular buyer to sell to. Our results hold in either model. For notational simplicity we will use the menu option here.

Next we need to understand the optimization problem of a trader t. Suppose we have bid and ask values for all other traders t′ ∈ T, t′ ≠ t. What are the best bid and ask offers trader t can make as a best response to the current set of bids and asks? For each seller i, let p_i be the maximum profit seller i can make using bids by other traders, and symmetrically let p_j be the maximum profit buyer j can make using asks by other traders (let p_i = 0 for any seller or buyer i who cannot make a profit). Now consider a seller-buyer pair (i, j) that trader t can connect. Trader t will have to make a bid of at least β_tij = θ_ij + p_i to seller i and an ask of at most α_tji = θ_ji − p_j to buyer j to get this trade, so the maximum profit she can make on this trade is v_tij = α_tji − β_tij = (θ_ji − p_j) − (θ_ij + p_i). The optimal trade for trader t is obtained by solving a matching problem: find the matching between the sellers S(t) and buyers B(t) that maximizes the total value v_tij for trader t. We will need the dual of the linear program for finding the trade of maximum profit for the trader t. We will use q_ti as the dual variable associated with the constraint of seller or buyer i. The dual is then the following problem:

    min  val(q_t) = ∑_{i∈B(t)∪S(t)} q_ti
    s.t. q_ti ≥ 0              for all i ∈ S(t) ∪ B(t)
         q_ti + q_tj ≥ v_tij   for all i ∈ S(t), j ∈ B(t)

We view q_ti as the profit made by t from trading with seller or buyer i.
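As a quick illustration of this best-response computation, the sketch below forms the values v_tij for one trader on a small assumed instance and solves the resulting assignment problem with scipy.optimize.linear_sum_assignment; the numbers, the clamping of unprofitable pairs to zero, and the SciPy dependency are assumptions made for the example.

```python
# Sketch: best response of a single trader t, as the maximum-value matching between
# the sellers S(t) and buyers B(t) it can reach (made-up outside profits p_i).
import numpy as np
from scipy.optimize import linear_sum_assignment

S_t, B_t = ["i1", "i2"], ["j1", "j2"]
p = {"i1": 0.0, "i2": 0.5, "j1": 0.0, "j2": 0.25}    # best profits offered by other traders
theta_buy = {("j1", "i1"): 2, ("j1", "i2"): 1, ("j2", "i1"): 1, ("j2", "i2"): 2}   # theta_ji
theta_sell = {(i, j): 0.0 for i in S_t for j in B_t}                               # theta_ij

# v_tij = (theta_ji - p_j) - (theta_ij + p_i); pairs with negative value are clamped
# to 0, since the trader can simply decline to create that trade.
V = np.array([[(theta_buy[(j, i)] - p[j]) - (theta_sell[(i, j)] + p[i]) for j in B_t]
              for i in S_t])
V = np.maximum(V, 0.0)

rows, cols = linear_sum_assignment(V, maximize=True)
for r, c in zip(rows, cols):
    if V[r, c] > 0:
        i, j = S_t[r], B_t[c]
        print(f"{i} -> {j}: bid {theta_sell[(i, j)] + p[i]}, "
              f"ask {theta_buy[(j, i)] - p[j]}, profit {V[r, c]}")
```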
Theorem 3.1 summarizes the above discussion.

THEOREM 3.1. For a trader t, given the lowest bids β_tij and highest asks α_tji that can be accepted for sellers i ∈ S(t) and buyers j ∈ B(t), the best trade t can make is the maximum-value matching between S(t) and B(t) with value v_tij = α_tji − β_tij for the edge (i, j). This maximum value is equal to the minimum of the dual linear program above.

3.2 Efficient Trade and Equilibrium

Now we can prove that trade at equilibrium is always efficient.

THEOREM 3.2. Every equilibrium results in an efficient allocation of the goods.

Proof. Consider an equilibrium, with x_e = 1 if and only if trade occurs along edge e = (i, j). The trade is a solution to the transshipment linear program used in Section 2.2. Let p_i denote the profit of seller or buyer i. Each trader t currently has the best solution to his own optimization problem. A trader t finds his optimal trade (given bids and asks by all other traders) by solving a matching problem. Let q_ti for i ∈ B(t) ∪ S(t) denote the optimal dual solution to this matching problem, as described by Theorem 3.1. When setting up the optimization problem for a trader t above, we used p_i to denote the maximum profit i can make without the offer of trader t. Note that this p_i is exactly the same p_i we use here, the profit of agent i. This is clearly true for all traders t that are not trading with i in the equilibrium. To see why it is true for the trader t that i is trading with, we use the fact that the current set of bid-ask values is an equilibrium: if for any agent i the bid or ask of trader t were the unique best option, then t could extract more profit by offering a slightly larger ask or a slightly smaller bid, a contradiction.

We show the trade x is optimal by considering the dual solution z_i = p_i + ∑_t q_ti for all agents i ∈ B ∪ S. We claim z is a dual solution, and that it satisfies complementary slackness with the trade x. To see this we need to show a few facts. We need that z_i > 0 implies that i trades. If z_i > 0 then either p_i > 0 or q_ti > 0 for some trader t. Agent i can only make profit p_i > 0 if he is involved in a trade. If q_ti > 0 for some t, then trader t must trade with i: his solution is optimal, and by complementary slackness for the dual solution, q_ti > 0 implies that t trades with i.
For an edge (i, j) associated with a trader t we need to show the dual solution is feasible, that is, z_i + z_j ≥ θ_ji − θ_ij. Recall that v_tij = (θ_ji − p_j) − (θ_ij + p_i), and the dual constraint of the trader's optimization problem requires q_ti + q_tj ≥ v_tij. Putting these together, we have z_i + z_j ≥ p_i + q_ti + p_j + q_tj ≥ v_tij + p_i + p_j = θ_ji − θ_ij. Finally, we need to show that the trade variables x also satisfy the complementary slackness constraint: when x_e > 0 for an edge e = (i, j), then the corresponding dual constraint is tight. Let t be the trader involved in the trade. By complementary slackness of t's optimization problem we have q_ti + q_tj = v_tij. To see that z satisfies complementary slackness, we need to argue that for all other traders t′ ≠ t we have both q_t′i = 0 and q_t′j = 0. This is true because q_t′i > 0 would imply, by complementary slackness of t′'s optimization problem, that t′ trades with i at its optimum, whereas i is in fact trading with t ≠ t′.

Next we want to show that a non-crossing equilibrium always exists. We call an equilibrium non-crossing if the bid-ask offers a trader t makes for a seller-buyer pair (i, j) never cross, that is, β_tij ≤ α_tji for all t, i, j.

THEOREM 3.3. There exists a non-crossing equilibrium supporting any socially optimal trade.

Proof. Consider an optimal trade x and a dual solution z as before. To find a non-crossing equilibrium we need to divide the profit z_i between i and the trader t trading with i. We will use q_ti as the trader t's profit associated with agent i, for any i ∈ S(t) ∪ B(t). We will need to guarantee the following properties. Trader t trades with agent i whenever q_ti > 0; this is one of the complementary slackness conditions needed to make sure the current trade is optimal for trader t.
For all seller-buyer pairs (i, j) that a trader t can trade with, we must have

    p_i + q_ti + p_j + q_tj ≥ θ_ji − θ_ij,    (1)

which makes sure that q_t is a feasible dual solution for the optimization problem faced by trader t. We need to have equality in (1) when trader t is trading between i and j; this is one of the complementary slackness conditions for trader t, and will ensure that the trade of t is optimal for the trader. Finally, we want to arrange that each agent i with p_i > 0 has multiple offers for making profit p_i, and the trade occurs at one of his best offers. To guarantee this in the corresponding bids and asks, we need to make sure that whenever p_i > 0 there are multiple traders t ∈ adj(i) that achieve equality in constraint (1).

We start by setting p_i = z_i for all i ∈ S ∪ B, and q_ti = 0 for all i ∈ S ∪ B and traders t ∈ adj(i). This guarantees all invariants except the last property, about multiple t ∈ adj(i) having equality in (1). We will modify p and q to gradually enforce the last condition, while maintaining the others. Consider a seller with p_i > 0. By optimality of the trade and the dual solution z, seller i must trade with some trader t, and that trader will have equality in (1) for the buyer j that he matches with i. If this is the only trader that has a tight constraint in (1) involving seller i, then we increase q_ti and decrease p_i until either p_i = 0 or another trader t′ ≠ t achieves equality in (1) for some edge adjacent to i (possibly for a different buyer j′). This change maintains all invariants, and increases the set of sellers that also satisfy the last constraint. We can do a similar change for a buyer j that has p_j > 0 and has only one trader t with a tight constraint (1) adjacent to j. After possibly repeating this for all sellers and buyers, we get profits satisfying all constraints.

Now we get equilibrium bid and ask values as follows. For a trader t that has equality for the seller-buyer pair (i, j) in (1), we offer α_tji = θ_ji − p_j and β_tij = θ_ij + p_i. For all other traders t and seller-buyer pairs (i, j) we have the invariant (1), and using this we know we can pick a value γ in the range θ_ij + p_i + q_ti ≥ γ ≥ θ_ji − (p_j + q_tj). We offer bid and ask values β_tij = α_tji = γ. Neither the bid nor the ask will be the unique best offer for the seller or the buyer it is made to, and hence the trade x remains an equilibrium.

3.3 Trader Profits

Finally we turn to the goal of understanding, in the case of general traders, how a trader's profit is affected by its position in the network. First, we show how to maximize the total profit of a set of traders. The profit of trader t in an equilibrium is ∑_i q_ti. To find the maximum possible profit for a trader t or a set of traders T′, we need to do the following: find profits p_i ≥ 0 and q_ti ≥ 0 so that z_i = p_i + ∑_{t∈adj(i)} q_ti is an optimal dual solution, and so that the constraints (1) are satisfied for any seller i and buyer j connected through a trader t ∈ T.
Now, subject to all these conditions, we maximize the sum ∑_{t∈T′} ∑_{i∈S(t)∪B(t)} q_ti. Note that this maximization is a secondary objective function, subordinate to the primary objective that z be an optimal dual solution. The proof of Theorem 3.3 then shows how to turn this into an equilibrium.

THEOREM 3.4. The maximum value of ∑_{t∈T′} ∑_i q_ti above is the maximum profit the set T′ of traders can make.

Proof. By the proof of Theorem 3.2, the profits of a trader t can be written in this form, so the set of traders T′ cannot make more profit than claimed in this theorem. To see that T′ can indeed make this much profit, we use the proof of Theorem 3.3. We modify that proof to start with profit vectors p and q_t for t ∈ T′, and set q_t = 0 for all traders t ∉ T′. We verify that this starting solution satisfies the first three of the four required properties, and then we can follow the proof to make the fourth property true. We omit the details of this in the present version.

In Section 2.3 we showed that in the case of pair-traders, a trader t can make money if he is essential for efficient trade. This is not true for the type of more general traders we consider here, as shown by the example in Figure 3.

Figure 3: The top trader is essential for social welfare. Yet the only equilibrium is to have bid and ask values equal to 0, and the trader makes no profit.

However, we still get a characterization of when a trader t can make a positive profit.

THEOREM 3.5. A trader t can make profit in an equilibrium if and only if there is a seller or buyer i adjacent to t such that the connection of trader t to agent i is essential for social welfare; that is, if deleting t from adj(i) decreases the value of the optimal allocation.

Proof. First we show the direction that if a trader t can make money, there must be an agent i so that t's connection to i is essential to social welfare. Let p, q be the profits in an equilibrium where t makes money, as described by Theorem 3.2, with ∑_{i∈S(t)∪B(t)} q_ti > 0. So we have some agent i with q_ti > 0. We claim that the connection between agent i and trader t must be essential; in particular, we claim that social welfare must decrease by at least q_ti if we delete t from adj(i). To see why, note that decreasing the value of all edges of the form (i, j) associated with trader t by q_ti keeps the same trade optimal, as we get a matching dual solution by simply resetting q_ti to zero.

To see the opposite direction, assume deleting t from adj(i) decreases social welfare by some value γ. Assume i is a seller (the case of buyers is symmetric), and decrease by γ the social value of each edge (i, j) for any buyer j such that t is the only trader connecting i and j. By assumption the trade is still optimal, and we let z be the dual solution for this matching. Now we use the same process as in the proof of Theorem 3.3 to create a non-crossing equilibrium, starting with p_i = z_i for all i ∈ S ∪ B, q_ti = γ, and all other q values 0. This creates an equilibrium with non-crossing bids where t makes at least γ profit (due to trade with seller i).
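A direct way to check the condition in Theorem 3.5 is to recompute the optimal allocation value after removing the connection in question. The sketch below does this for a small assumed instance; the helper, the instance, and the networkx dependency are illustrative assumptions, not the paper's algorithm for computing the actual profit bounds.

```python
# Sketch: test whether trader t2's connection to seller i2 is essential (Theorem 3.5)
# by comparing the optimal allocation value with and without that connection.
import networkx as nx

def theta_star(links, value):
    """Optimal allocation value, given trader links {t: (sellers, buyers)} and
    pair values value[(i, j)] = theta_ji - theta_ij."""
    G = nx.Graph()
    for sellers, buyers in links.values():
        for i in sellers:
            for j in buyers:
                G.add_edge(i, j, weight=value[(i, j)])
    M = nx.max_weight_matching(G)
    return sum(G[u][v]["weight"] for u, v in M)

links = {"t1": ({"i1"}, {"j1"}), "t2": ({"i1", "i2"}, {"j1", "j2"})}
value = {("i1", "j1"): 1, ("i1", "j2"): 1, ("i2", "j1"): 1, ("i2", "j2"): 1}

base = theta_star(links, value)
links_cut = {"t1": links["t1"], "t2": ({"i1"}, {"j1", "j2"})}   # drop t2 from adj(i2)
print("essential:", theta_star(links_cut, value) < base)        # True: t2 can profit from i2
```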
Finally, if we allow crossing equilibria, then we can find the minimum possible profit by simply finding a dual solution minimizing the dual variables associated with agents monopolized by some trader.

THEOREM 3.6. For any trader t or subset of traders T′, the minimum total profit they can make in any equilibrium can be computed in polynomial time.

4. REFERENCES

[1] M. Babaioff, N. Nisan, E. Pavlov. Mechanisms for a Spatially Distributed Market. ACM EC Conference, 2005.
[2] C. Barrett, E. Mutambatsere. Agricultural Markets in Developing Countries. The New Palgrave Dictionary of Economics, 2nd edition, forthcoming.
[3] K. Burdett, K. Judd. Equilibrium Price Dispersion. Econometrica, 51(4):955-969, July 1983.
[4] L. Chu, Z.-J. Shen. Agent Competition Double Auction Mechanism. Management Science, 52(8), 2006.
[5] G. Demange, D. Gale, M. Sotomayor. Multi-Item Auctions. Journal of Political Economy, 94, 1986.
[6] E. Even-Dar, M. Kearns, S. Suri. A Network Formation Game for Bipartite Exchange Economies. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2007.
[7] J. Kephart, J. Hanson, A. Greenwald. Dynamic Pricing by Software Agents. Computer Networks, 2000.
[8] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, S. Suri. Economic Properties of Social Networks. NIPS, 2004.
[9] R. Kranton, D. Minehart. A Theory of Buyer-Seller Networks. American Economic Review, 91(3), June 2001.
[10] H. Leonard. Elicitation of Honest Preferences for the Assignment of Individuals to Positions. Journal of Political Economy, 1983.
[11] M. E. J. Newman. The Structure and Function of Complex Networks. SIAM Review, 45:167-256, 2003.
[12] M. O'Hara. Market Microstructure Theory. Blackwell Publishers, Cambridge, MA, 1995.
[13] L. Shapley, M. Shubik. The Assignment Game I: The Core. International Journal of Game Theory, 1(2):111-130, 1972.
[14] J. Tirole. The Theory of Industrial Organization. The MIT Press, Cambridge, MA, 1988.
socially optimal ) allocation of goods .\nWe extend these results to a more general type of matching market , such as one finds in the matching of job applicants and employers .\nFinally , we consider how the profits obtained by the traders depend on the underlying graph -- roughly , a trader can command a positive profit if and only if it has an `` essential '' connection in the network structure , thus providing a graph-theoretic basis for quantifying the amount of competition among traders .\nOur work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system , rather than studying prices set via competitive equilibrium or by a truthful mechanism .\n1 .\nINTRODUCTION\nIn a range of settings where markets mediate the interactions of buyers and sellers , one observes several recurring properties : Individual buyers and sellers often trade through intermediaries , not all buyers and sellers have access to the same intermediaries , and not all buyers and sellers trade at the same price .\nOne example of this setting is the trade of agricultural goods in developing countries .\nGiven inadequate transportation networks , and poor farmers ' limited access to capital , many farmers have no alternative to trading with middlemen in inefficient local markets .\nA developing country may have many such partially overlapping markets existing alongside modern efficient markets [ 2 ] .\nFinancial markets provide a different example of a setting with these general characteristics .\nIn these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems .\nFor many assets there is no one market ; trade in a single asset may occur simultaneously on the floor of an exchange , on crossing networks , on electronic exchanges , and in markets in other countries .\nSome buyers and sellers have access to many or all of these trading venues ; others have access to only one or a few of them .\nThe price at which the asset trades may differ across these trading venues .\nIn fact , there is no `` price '' as different traders pay or receive different prices .\nIn many settings there is also a gap between the price a buyer pays for an asset , the ask price , and the price a seller receives for the asset , the bid price .\nOne of the most striking examples of this phenomenon occurs in the market for foreign exchange , where there is an interbank market with restricted access and a retail market with much more open access .\nSpreads , defined as the difference between bid and ask prices , differ significantly across these markets , even though the same asset is being traded in the two markets .\nIn this paper , we develop a framework in which such phenomena emerge from a game-theoretic model of trade , with buyers , sellers , and traders interacting on a network .\nThe edges of the network connect traders to buyers and sellers , and thus represent the access that different market participants have to one another .\nThe traders serve as intermediaries in a two-stage trading game : they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to ; the sellers and buyers then react to the prices they face .\nThus , the network encodes the relative power in the structural positions of the market participants , including the implicit levels of competition among 
traders .\nWe show that this game always has a\nsubgame perfect Nash equilibrium , and that all equilibria lead to an efficient ( i.e. socially optimal ) allocation of goods .\nWe also analyze how trader profits depend on the network structure , essentially characterizing in graph-theoretic terms how a trader 's payoff is determined by the amount of competition it experiences with other traders .\nOur work here is connected to several lines of research in economics , finance , and algorithmic game theory , and we discuss these connections in more detail later in the introduction .\nAt a general level , our approach can be viewed as synthesizing two important strands of work : one that treats buyer-seller interaction using network structures , but without attempting to model the processses by which prices are actually formed [ 1 , 4 , 5 , 6 , 8 , 9 , 10 , 13 ] ; and another strand in the literature on market microstructure that incorporates price-setting intermediaries , but without network-type constraints on who can trade with whom [ 12 ] .\nBy developing a network model that explicitly includes traders as price-setting agents , in a system together with buyers and sellers , we are able to capture price formation in a network setting as a strategic process carried out by intermediaries , rather than as the result of a centrally controlled or exogenous mechanism .\nThe Basic Model : Indistinguishable Goods .\nOur goal in formulating the model is to express the process of price-setting in markets such as those discussed above , where the participants do not all have uniform access to one another .\nWe are given a set B of buyers , a set S of sellers , and a set T of traders .\nThere is an undirected graph G that indicates who is able to trade with whom .\nAll edges have one end in B U S and the other in T ; that is , each edge has the form ( i , t ) for i E S and t E T , or ( j , t ) for j E B and t E T .\nThis reflects the constraints that all buyer-seller transactions go through traders as intermediaries .\nIn the most basic version of the model , we consider identical goods , one copy of which is initially held by each seller .\nBuyers and sellers each have a value for one copy of the good , and we assume that these values are common knowledge .\nWe will subsequently generalize this to a setting in which goods are distinguishable , buyers can value different goods differently , and potentially sellers can value transactions with different buyers differently as well .\nHaving different buyer valuations captures settings like house purchases ; adding different seller valuations as well captures matching markets -- for example , sellers as job applicants and buyers as employers , with both caring about who ends up with which `` good '' ( and with traders acting as services that broker the job search ) .\nThus , to start with the basic model , there is a single type of good ; the good comes in individisible units ; and each seller initially holds one unit of the good .\nAll three types of agents value money at the same rate ; and each i E B U S additionally values one copy of the good at \u03b8i units of money .\nNo agent wants more than one copy of the good , so additional copies are valued at 0 .\nEach agent has an initial endowment of money that is larger than any individual valuation \u03b8i ; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position , not a lack of funds .\nWe 
We picture each good that is sold flowing along a sequence of two edges: from a seller to a trader, and then from the trader to a buyer. The particular way in which goods flow is determined by the following game. First, each trader offers a bid price to each seller it is connected to, and an ask price to each buyer it is connected to. Sellers and buyers then choose from among the offers presented to them by traders. If multiple traders propose the same price to a seller or buyer, then there is no strict best response for the seller or buyer. In this case a selection must be made, and, as is standard (see for example [10]), we (the modelers) choose among the best offers. Finally, each trader buys a copy of the good from each seller that accepts its offer, and it sells a copy of the good to each buyer that accepts its offer. If a particular trader t finds that more buyers than sellers accept its offers, then it has committed to provide more copies of the good than it has received, and we will say that this results in a large penalty to the trader for defaulting; the effect of this is that in equilibrium, no trader will choose bid and ask prices that result in a default.

More precisely, a strategy for each trader t is a specification of a bid price β_ti for each seller i to which t is connected, and an ask price α_tj for each buyer j to which t is connected. (We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers.) Each seller or buyer then chooses at most one incident edge, indicating the trader with whom they will transact, at the indicated price. (The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.) The payoffs are as follows: For each seller i, the payoff from selecting trader t is β_ti, while the payoff from selecting no trader is θ_i. (In the former case, the seller receives β_ti units of money, while in the latter it keeps its copy of the good, which it values at θ_i.) For each buyer j, the payoff from selecting trader t is θ_j - α_tj, while the payoff from selecting no trader is 0. (In the former case, the buyer receives the good but gives up α_tj units of money.) For each trader t, with accepted offers from sellers i_1, ..., i_s and buyers j_1, ..., j_b, the payoff is Σ_r α_{t j_r} - Σ_r β_{t i_r}, minus a penalty π if b > s. The penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty. This defines the basic elements of the game. The equilibrium concept we use is subgame perfect Nash equilibrium.
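The second-stage payoff rules just described are simple enough to state as code. The sketch below is a hypothetical helper, building on the Instance sketch above; the constant PENALTY stands in for the large default penalty π, and the function simply evaluates every agent's payoff once bids, asks, and the agents' choices are fixed.

PENALTY = 10**6   # stands in for the large default penalty "π"

def payoffs(inst, beta, alpha, choice):
    """beta[(t, i)]: bid of trader t to seller i; alpha[(t, j)]: ask of t to buyer j;
    choice[a]: the trader selected by seller/buyer a, or None."""
    pay = {}
    sold = {t: 0 for t in inst.traders}
    bought = {t: 0 for t in inst.traders}
    for i in inst.sellers:
        t = choice.get(i)
        pay[i] = inst.theta[i] if t is None else beta[(t, i)]       # keep the good vs. sell it
        if t is not None:
            sold[t] += 1
    for j in inst.buyers:
        t = choice.get(j)
        pay[j] = 0 if t is None else inst.theta[j] - alpha[(t, j)]  # value minus price paid
        if t is not None:
            bought[t] += 1
    for t in inst.traders:
        revenue = sum(alpha[(t, j)] for j in inst.buyers if choice.get(j) == t)
        cost = sum(beta[(t, i)] for i in inst.sellers if choice.get(i) == t)
        pay[t] = revenue - cost - (PENALTY if bought[t] > sold[t] else 0)
    return pay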
Some Examples.

To help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1. To keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, ... from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, ... from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, ... from top to bottom. All sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.

In Figure 1(a), we show how a standard second-price auction arises naturally from our model. Suppose the buyer valuations from top to bottom are w > x > y > z. The bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w - x by selling the good to buyer j1 for w. In this way, we can consider this particular instance as an auction for a single good in which the traders act as "proxies" for their adjacent buyers. The buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader. Note that one can construct a k-unit auction with f > k buyers just as easily, by building a complete bipartite graph on k sellers and f traders, and then attaching each trader to a single distinct buyer.

Figure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it. (b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network. (c) A form of implicit perfect competition: all bid/ask spreads will be zero in equilibrium, even though no trader directly "competes" with any other trader for the same buyer-seller pair.

In Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same numerically. Specifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option. And indeed, in every equilibrium, there is a real number x ∈ [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers. Thus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly. Specifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business. On the other hand, the other sellers and buyers experience the downsides of monopoly -- they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit. Note further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents -- capturing the fact that there is no one fixed "price" in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.

The previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely. (In that example, two traders each had the ability to move a copy of the good from i2 to j2.)
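As a small check on the auction reading of Figure 1(a), the snippet below evaluates the outcome the text describes, reusing the Instance and payoffs sketches above. The concrete numbers for w, x, y, z, the exact wiring of the figure, and the losing traders' quotes are assumptions made for illustration; the text itself only fixes w > x > y > z and the winning trader's bid of x and ask of w.

# Illustrative numbers; the figure itself only requires w > x > y > z.
w, x, y, z = 10, 8, 5, 3
fig_1a = Instance(
    sellers=["i1"],
    buyers=["j1", "j2", "j3", "j4"],
    traders=["t1", "t2", "t3", "t4"],
    edges={("i1", "t1"), ("i1", "t2"), ("i1", "t3"), ("i1", "t4"),
           ("j1", "t1"), ("j2", "t2"), ("j3", "t3"), ("j4", "t4")},
    theta={"i1": 0, "j1": w, "j2": x, "j3": y, "j4": z},
)
# Assumed quotes: each losing trader bids/asks its own buyer's valuation,
# while t1 bids the second-highest value x and asks w.
beta = {("t1", "i1"): x, ("t2", "i1"): x, ("t3", "i1"): y, ("t4", "i1"): z}
alpha = {("t1", "j1"): w, ("t2", "j2"): x, ("t3", "j3"): y, ("t4", "j4"): z}
# The seller takes a best bid (t1, breaking the tie with t2 as the modelers'
# choice); only buyer j1 accepts an offer, so t1 resells at w and earns the spread.
choice = {"i1": "t1", "j1": "t1", "j2": None, "j3": None, "j4": None}
print(payoffs(fig_1a, beta, alpha, choice))
# -> seller i1 gets x = 8, trader t1 gets w - x = 2, all buyers and other traders get 0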
But as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons. The example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y ∈ [0, 1] such that every bid and every ask price is equal to y. In other words, all traders make zero profit, whether or not a copy of the good passes through them -- and yet, no two traders have any seller-buyer paths in common. The price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.

Extending the Model to Distinguishable Goods.

We extend the basic model to a setting with distinguishable goods, as follows. Instead of having each agent i ∈ B ∪ S have a single numerical valuation θ_i, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of θ_ji, and if seller i sells its good to buyer j, it experiences a loss of utility of θ_ij. This generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices. A strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller. (We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a "menu" with a price attached to each buyer (resp. seller).) Each buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before. This general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures). Here the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market. Of course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to -- this (roughly) captures settings like housing markets.
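The following sketch (all names are hypothetical, not from the paper) shows how the objects change once goods are distinguishable: a bid or ask now carries a designated counterparty as well as a price, and valuations are indexed by buyer-seller pairs, illustrated here with a small job-matching flavor.

from dataclasses import dataclass

@dataclass(frozen=True)
class Bid:            # offered by a trader to a seller i
    price: float
    buyer: str        # the buyer this purchase is earmarked for

@dataclass(frozen=True)
class Ask:            # offered by a trader to a buyer j
    price: float
    seller: str       # which seller's good is being offered

# Pairwise valuations for a toy matching market: applicants (sellers) i1, i2
# and employers (buyers) j1, j2.  theta_buy[j][i] is employer j's value for
# hiring applicant i; theta_sell[i][j] is applicant i's cost of working for j.
theta_buy = {"j1": {"i1": 5, "i2": 2}, "j2": {"i1": 3, "i2": 4}}
theta_sell = {"i1": {"j1": 1, "j2": 1}, "i2": {"j1": 2, "j2": 1}}

# One possible strategy for a trader t1 connected to everyone: route i1 to j1
# and i2 to j2, quoting prices between the two sides' valuations.
bids_t1 = {"i1": Bid(price=2.0, buyer="j1"), "i2": Bid(price=2.0, buyer="j2")}
asks_t1 = {"j1": Ask(price=4.0, seller="i1"), "j2": Ask(price=3.0, seller="i2")}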
Our Results.

Our results will identify general forms of some of the principles noted in the examples discussed above -- including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits. To make these precise, we introduce the following notation. Any outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (i_e, t_e, j_e), where i_e ∈ S, t_e ∈ T, and j_e ∈ B; moreover, each seller and each buyer appears in at most one triple. The meaning is that for each e ∈ M, the good initially held by i_e moves to j_e through t_e. (Sellers appearing in no triple keep their copy of the good.) We say that the value of the allocation is equal to Σ_{e∈M} (θ_{j_e i_e} - θ_{i_e j_e}). Let θ* denote the maximum value of any allocation M that is feasible given the network. We show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value θ* -- in other words, it achieves the best value possible. Thus, equilibria in this model are always efficient, in that the market enables the "right" set of people to get the good, subject to the network constraints. We establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices.

By the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed -- in particular how much profit a trader is able to make based on its position in the network. We find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria. However, we are able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute this. In particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation θ*. We also obtain results for the sum of all trader profits.
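The quantities in this paragraph are easy to compute directly on small instances. The sketch below is a brute-force check, not the paper's linear-programming algorithm: optimal_value enumerates feasible allocations to find θ*, and can_profit applies the essential-edge characterization stated above by deleting each edge incident to a trader and re-solving. Function names and the dictionary-based inputs are assumptions for illustration.

from itertools import product

def optimal_value(sellers, buyers, traders, edges, theta_buy, theta_sell):
    """Brute-force θ*: the best total value of an allocation, i.e. a set of
    triples (i, t, j) with edges (i, t) and (j, t), each seller and each buyer
    used at most once.  The value of a triple is theta_buy[j][i] - theta_sell[i][j].
    Exponential search; only meant for tiny illustrative instances."""
    pairs = [(i, t, j)
             for i, t, j in product(sellers, traders, buyers)
             if (i, t) in edges and (j, t) in edges]

    def best(k, used_sellers, used_buyers):
        if k == len(pairs):
            return 0
        i, t, j = pairs[k]
        skip = best(k + 1, used_sellers, used_buyers)
        if i in used_sellers or j in used_buyers:
            return skip
        take = (theta_buy[j][i] - theta_sell[i][j]
                + best(k + 1, used_sellers | {i}, used_buyers | {j}))
        return max(skip, take)

    return best(0, frozenset(), frozenset())

def can_profit(trader, sellers, buyers, traders, edges, theta_buy, theta_sell):
    """Per the characterization in the text: trader t can earn a positive payoff
    in some equilibrium iff deleting one of its incident edges lowers θ*."""
    base = optimal_value(sellers, buyers, traders, edges, theta_buy, theta_sell)
    for e in [e for e in edges if e[1] == trader]:
        if optimal_value(sellers, buyers, traders, edges - {e},
                         theta_buy, theta_sell) < base:
            return True
    return False

# Reusing fig_1b from above, encoded with pairwise valuations (value 1 for every
# buyer, cost 0 for every seller): θ* = 3, and the edge (i1, t1) is essential,
# so t1 can earn a positive payoff, matching the example's outcome.
tb = {j: {i: 1 for i in fig_1b.sellers} for j in fig_1b.buyers}
ts = {i: {j: 0 for j in fig_1b.buyers} for i in fig_1b.sellers}
print(optimal_value(fig_1b.sellers, fig_1b.buyers, fig_1b.traders, fig_1b.edges, tb, ts))   # -> 3
print(can_profit("t1", fig_1b.sellers, fig_1b.buyers, fig_1b.traders, fig_1b.edges, tb, ts))  # -> True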
Related Work.

The standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model, in which anonymous buyers and sellers trade a good at a single market-clearing price. This reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights. But it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other. The difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other. In fact there is no market, in the everyday sense of that word, in the Walrasian model. That is, there is no physical or virtual place where buyers and sellers interact to trade and set prices. Thus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.

There are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices. The literature on imperfect competition is perhaps the oldest of these. Here a monopolist, or a group of oligopolists, chooses prices in order to maximize profits (see [14] for the standard textbook treatment of these markets). A monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates. Oligopolists play a game in which their payoffs depend on market demand and the actions of their competitors. In this literature there are agents who set prices, but the fiction of a single market is maintained. In the equilibrium search literature, firms set prices and consumers search over them (see [3]). Consumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries. In the general equilibrium literature there have been various attempts to introduce price determination. A standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand. The Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market. More sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them. But again there are no price-setting agents here. In the finance literature, the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory). Work in information economics has identified similar phenomena (see e.g. [7]). But there is little research in these literatures examining the effect of restrictions on who can trade with whom.

There have been several approaches to studying how network structure determines prices. These have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms. In briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system. In recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium. Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al. and Chu and Shen additionally provide a budget-balanced mechanism.
Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition. In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism. Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient -- something not possible in the mechanism design frameworks that have been used. Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties but, as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.¹ Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is different, as they do not consider agents that seek to set prices.
socially optimal ) allocation of goods .\nWe extend these results to a more general type of matching market , such as one finds in the matching of job applicants and employers .\nFinally , we consider how the profits obtained by the traders depend on the underlying graph -- roughly , a trader can command a positive profit if and only if it has an `` essential '' connection in the network structure , thus providing a graph-theoretic basis for quantifying the amount of competition among traders .\nOur work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system , rather than studying prices set via competitive equilibrium or by a truthful mechanism .\n1 .\nINTRODUCTION\nIn a range of settings where markets mediate the interactions of buyers and sellers , one observes several recurring properties : Individual buyers and sellers often trade through intermediaries , not all buyers and sellers have access to the same intermediaries , and not all buyers and sellers trade at the same price .\nOne example of this setting is the trade of agricultural goods in developing countries .\nGiven inadequate transportation networks , and poor farmers ' limited access to capital , many farmers have no alternative to trading with middlemen in inefficient local markets .\nA developing country may have many such partially overlapping markets existing alongside modern efficient markets [ 2 ] .\nFinancial markets provide a different example of a setting with these general characteristics .\nIn these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems .\nFor many assets there is no one market ; trade in a single asset may occur simultaneously on the floor of an exchange , on crossing networks , on electronic exchanges , and in markets in other countries .\nSome buyers and sellers have access to many or all of these trading venues ; others have access to only one or a few of them .\nThe price at which the asset trades may differ across these trading venues .\nIn fact , there is no `` price '' as different traders pay or receive different prices .\nIn many settings there is also a gap between the price a buyer pays for an asset , the ask price , and the price a seller receives for the asset , the bid price .\nSpreads , defined as the difference between bid and ask prices , differ significantly across these markets , even though the same asset is being traded in the two markets .\nIn this paper , we develop a framework in which such phenomena emerge from a game-theoretic model of trade , with buyers , sellers , and traders interacting on a network .\nThe edges of the network connect traders to buyers and sellers , and thus represent the access that different market participants have to one another .\nThe traders serve as intermediaries in a two-stage trading game : they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to ; the sellers and buyers then react to the prices they face .\nThus , the network encodes the relative power in the structural positions of the market participants , including the implicit levels of competition among traders .\nWe show that this game always has a\nsubgame perfect Nash equilibrium , and that all equilibria lead to an efficient ( i.e. 
socially optimal ) allocation of goods .\nWe also analyze how trader profits depend on the network structure , essentially characterizing in graph-theoretic terms how a trader 's payoff is determined by the amount of competition it experiences with other traders .\nBy developing a network model that explicitly includes traders as price-setting agents , in a system together with buyers and sellers , we are able to capture price formation in a network setting as a strategic process carried out by intermediaries , rather than as the result of a centrally controlled or exogenous mechanism .\nThe Basic Model : Indistinguishable Goods .\nOur goal in formulating the model is to express the process of price-setting in markets such as those discussed above , where the participants do not all have uniform access to one another .\nWe are given a set B of buyers , a set S of sellers , and a set T of traders .\nThere is an undirected graph G that indicates who is able to trade with whom .\nThis reflects the constraints that all buyer-seller transactions go through traders as intermediaries .\nIn the most basic version of the model , we consider identical goods , one copy of which is initially held by each seller .\nBuyers and sellers each have a value for one copy of the good , and we assume that these values are common knowledge .\nWe will subsequently generalize this to a setting in which goods are distinguishable , buyers can value different goods differently , and potentially sellers can value transactions with different buyers differently as well .\nHaving different buyer valuations captures settings like house purchases ; adding different seller valuations as well captures matching markets -- for example , sellers as job applicants and buyers as employers , with both caring about who ends up with which `` good '' ( and with traders acting as services that broker the job search ) .\nThus , to start with the basic model , there is a single type of good ; the good comes in individisible units ; and each seller initially holds one unit of the good .\nAll three types of agents value money at the same rate ; and each i E B U S additionally values one copy of the good at \u03b8i units of money .\nNo agent wants more than one copy of the good , so additional copies are valued at 0 .\nEach agent has an initial endowment of money that is larger than any individual valuation \u03b8i ; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position , not a lack of funds .\nWe picture each good that is sold flowing along a sequence of two edges : from a seller to a trader , and then from the trader to a buyer .\nThe particular way in which goods flow is determined by the following game .\nFirst , each trader offers a bid price to each seller it is connected to , and an ask price to each buyer it is connected to .\nSellers and buyers then choose from among the offers presented to them by traders .\nIf multiple traders propose the same price to a seller or buyer , then there is no strict best response for the seller or buyer .\nFinally , each trader buys a copy of the good from each seller that accepts its offer , and it sells a copy of the good to each buyer that accepts its offer .\nIf a particular trader t finds that more buyers than sellers accept its offers , then it has committed to provide more copies of the good than it has received , and we will say that this results in a large penalty to the trader for 
defaulting ; the effect of this is that in equilibrium , no trader will choose bid and ask prices that result in a default .\nMore precisely , a strategy for each trader t is a specification of a bid price 3ti for each seller i to which t is connected , and an ask price \u03b1tj for each buyer j to which t is connected .\n( We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers . )\nEach seller or buyer then chooses at most one incident edge , indicating the trader with whom they will transact , at the indicated price .\n( The choice of a single edge reflects the facts that ( a ) sellers each initially have only one copy of the good , and ( b ) buyers each only want one copy of the good . )\nThe payoffs are as follows : For each seller i , the payoff from selecting trader t is 3ti , while the payoff from selecting no trader is \u03b8i .\n( In the former case , the seller receives 3ti units of money , while in the latter it keeps its copy of the good , which it values at \u03b8i . )\nFor each buyer j , the payoff from selecting trader t is \u03b8j -- \u03b1tj , whle the payoff from selecting no trader is 0 .\n( In the former case , the buyer receives the good but gives up \u03b1tj units of money . )\nFor each trader t , with accepted offers from sellers i1 , ... , is and buyers j1 , ... , jb , the payoff is Pr \u03b1tjr -- Pr 3tir , minus a penalty \u03c0 if b > s .\nThe penalty is chosen to be large enough that a trader will never incur it in equilibrium , and hence we will generally not be concerned with the penalty .\nThis defines the basic elements of the game .\nThe equilibrium concept we use is subgame perfect Nash equilibrium .\nSome Examples .\nTo help with thinking about the model , we now describe three illustrative examples , depicted in Figure 1 .\nAll sellers in the examples will have valuations for the good equal to 0 ; the valuation of each buyer is drawn inside its circle ; and the bid or ask price on each edge is drawn on top of the edge .\nIn Figure 1 ( a ) , we show how a standard second-price auction arises naturally from our model .\nSuppose the buyer valuations from top to bottom are w > x > y > z .\nThe bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1 , and no other buyer accepts the offer of its adjacent trader : thus , trader t1 receives the good with a bid price of x , and makes w -- x by selling the good to buyer j1 for w .\nIn this way , we can consider this particular instance as an auction for a single good in which the traders act as `` proxies '' for their adjacent buyers .\nThe buyer with the highest valuation for the good ends up with it , and the surplus is divided between the seller and the associated trader .\nNote that one can construct a k-unit auction with f > k buyers just as easily , by building a complete bipartite graph on k sellers and f traders , and then attaching each trader to a single distinct buyer .\nIn Figure 1 ( b ) , we show how nodes with different positions in the network topology can achieve different payoffs , even when all\nFigure 1 : ( a ) An auction , mediated by traders , in which the buyer with the highest valuation for the good ends up with it .\n( b )\nA network in which the middle seller and buyer benefit from perfect competition between the traders , while the other sellers and buyers have no power due to their position in the network .\n( c ) A form of implicit perfect competition : all bid/ask 
spreads will be zero in equilibrium , even though no trader directly `` competes '' with any other trader for the same buyer-seller pair .\nbuyer valuations are the same numerically .\nSpecifically , seller i2 and buyer j2 occupy powerful positions , because the two traders are competing for their business ; on the other hand , the other sellers and buyers are in weak positions , because they each have only one option .\nAnd indeed , in every equilibrium , there is a real number x E [ 0 , 1 ] such that both traders offer bid and ask prices of x to i2 and j2 respectively , while they offer bids of 0 and asks of 1 to the other sellers and buyers .\nThus , this example illustrates a few crucial ingredients that we will identify at a more general level shortly .\nSpecifically , i2 and j2 experience the benefits of perfect competition , in that the two traders drive the bid-ask spreads to 0 in competing for their business .\nOn the other hand , the other sellers and buyers experience the downsides of monopoly -- they receive 0 payoff since they have only a single option for trade , and the corresponding trader makes all the profit .\nNote further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents -- capturing the fact that there is no one fixed `` price '' in the kinds of markets that motivate the model , but rather different prices reflecting the relative power of the different agents involved .\nThe previous example shows perhaps the most natural way in which a trader 's profit on a particular transaction can drop to 0 : when there is another trader who can replicate its function precisely .\n( In that example , two traders each had the ability to move a copy of the good from i2 to j2 . )\nBut as our subsequent results will show , traders make zero profit more generally due to global , graph-theoretic reasons .\nThe example in Figure 1 ( c ) gives an initial indication of this : one can show that for every equilibrium , there is a y E [ 0 , 1 ] such that every bid and every ask price is equal to y .\nIn other words , all traders make zero profit , whether or not a copy of the good passes through them -- and yet , no two traders have any seller-buyer paths in common .\nThe price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents ; this is an example of implicit perfect competition determined by the network topology .\nExtending the Model to Distinguishable Goods .\nWe extend the basic model to a setting with distinguishable goods , as follows .\nA strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer , and offering an ask to each buyer that specifies both a price and a seller .\n( We can also handle a model in which a trader offers bids ( respectively , asks ) in the form of vectors , essentially specifying a `` menu '' with a price attached to each buyer ( resp .\nseller ) . 
)\nEach buyer and seller selects an offer from an adjacent trader , and the payoffs to all agents are determined as before .\nHere the sellers are job applicants , buyers are employers , and traders are the agents that mediate the job market .\nOf course , if one specifies pairwise valuations on buyers but just single valuations for sellers , we model a setting where buyers can distinguish among the goods , but sellers do n't care whom they sell to -- this ( roughly ) captures settings like housing markets .\nOur Results .\nTo make these precise , we introduce the following notation .\n( Sellers appearing in no triple keep their copy of the good . )\nWe say that the value of the allocation is equal to Pe \u2208 M \u03b8jeie -- \u03b8ieje .\nLet \u03b8 \u2217 denote the maximum value of any allocation M that is feasible given the network .\nWe show that every instance of our game has an equilibrium , and that in every such equilibrium , the allocation has value \u03b8 \u2217 --\nin other words , it achieves the best value possible .\nThus , equilibria in this model are always efficient , in that the market enables the `` right '' set of people to get the good , subject to the network constraints .\nWe establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network ; the dual of this linear program contains enough information to extract equilibrium prices .\nBy the definition of the game , the value of the equilibrium allocation is divided up as payoffs to the agents , and it is interesting to ask how this value is distributed -- in particular how much profit a trader is able to make based on its position in the network .\nWe find that , although all equilibria have the same value , a given trader 's payoff can vary across different equilibria .\nWe also obtain results for the sum of all trader profits .\nRelated Work .\nThe standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model in which anonymous buyers and sellers trade a good at a single market clearing price .\nThis reduced form of trade , built on the idealization of a market price , is a powerful model which has led to many insights .\nBut it is not a good model to use to examine where prices come from or exactly how buyers and sellers and trade with each other .\nThe difficulty is that in the Walrasian model there is no agent who sets the price , and agents do n't actually trade with each other .\nIn fact there is no market , in the everyday sense of that word , in the Walrasian model .\nThat is , there is no physical or virtual place where buyers and sellers interact to trade and set prices .\nThus in this simple model , all buyers and sellers are uniform and trade at the same price , and there is also no role for intermediaries .\nThere are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices .\nThe literature on imperfect competition is perhaps the oldest of these .\nHere a monopolist , or a group of oliogopolists , choose prices in order to maximize their profits ( see [ 14 ] for the standard textbook treatment of these markets ) .\nA monopolist uses its knowledge of market demand to choose a price , or a collection of prices if it discriminates .\nOliogopolists play a game in which their payoffs depend on market demand and the actions of their competitors .\nIn this literature there are agents who set prices , but the fiction of a single 
market is maintained .\nIn the equilibrium search literature , firms set prices and consumers search over them ( see [ 3 ] ) .\nConsumers do end up paying different prices , but all consumers have access to all firms and there are no intermediaries .\nIn the general equilibrium literature there have been various attempts to introduce price determination .\nA standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand .\nMore sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them .\nBut again there are no price-setting agents here .\nIn the finance literature the work on market microstructure does have price-setting agents ( specialists ) , parts of it do determine separate bid and ask prices , and different agents receive different prices for the same asset ( see [ 12 ] for a treatment of microstructure theory ) .\nWork in information economics has identified similar phenomena ( see e.g. [ 7 ] ) .\nBut there is little research in these literatures examining the effect of restrictions on who can trade with whom .\nThere have been several approaches to studying how network structure determines prices .\nThese have posited price determination through definitions based on competitive equilibrium or the core , or through the use of truthful mechanisms .\nIn briefly reviewing this work , we will note the contrast with our approach , in that we model prices as arising from the strategic behavior of agents in the system .\nIn recent work , Kakade et al. [ 8 ] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers , generated using a probabilistic model capable of producing heavy-tailed degree distributions [ 11 ] .\nEven-Dar et al. [ 6 ] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium .\nLeonard studies VCG prices in this setting ; Babaioff et al. 
and Chu and Shen additionally provide a a budget-balanced mechanism .\nIn contrast , our model has known valuations and prices arising from the strategic behavior of traders .\nDemange , Gale , and Sotomayor [ 5 ] , and Kranton and Minehart [ 9 ] , analyze the prices at which trade occurs in a network , working within the framework of mechanism design .\nKranton and Minehart use a bipartite graph with direct links between buyers and sellers , and then use an ascending auction mechanism , rather than strategic intermediaries , to determine the prices .\nTheir auction has desirable equilibrium properties but as Kranton and Minehart note it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction .", "lvl-2": "Trading Networks with Price-Setting Agents\nABSTRACT\nIn a wide range of markets , individual buyers and sellers often trade through intermediaries , who determine prices via strategic considerations .\nTypically , not all buyers and sellers have access to the same intermediaries , and they trade at correspondingly different prices that reflect their relative amounts of power in the market .\nWe model this phenomenon using a game in which buyers , sellers , and traders engage in trade on a graph that represents the access each buyer and seller has to the traders .\nIn this model , traders set prices strategically , and then buyers and sellers react to the prices they are offered .\nWe show that the resulting game always has a subgame perfect Nash equilibrium , and that all equilibria lead to an efficient ( i.e. socially optimal ) allocation of goods .\nWe extend these results to a more general type of matching market , such as one finds in the matching of job applicants and employers .\nFinally , we consider how the profits obtained by the traders depend on the underlying graph -- roughly , a trader can command a positive profit if and only if it has an `` essential '' connection in the network structure , thus providing a graph-theoretic basis for quantifying the amount of competition among traders .\nOur work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system , rather than studying prices set via competitive equilibrium or by a truthful mechanism .\n1 .\nINTRODUCTION\nIn a range of settings where markets mediate the interactions of buyers and sellers , one observes several recurring properties : Individual buyers and sellers often trade through intermediaries , not all buyers and sellers have access to the same intermediaries , and not all buyers and sellers trade at the same price .\nOne example of this setting is the trade of agricultural goods in developing countries .\nGiven inadequate transportation networks , and poor farmers ' limited access to capital , many farmers have no alternative to trading with middlemen in inefficient local markets .\nA developing country may have many such partially overlapping markets existing alongside modern efficient markets [ 2 ] .\nFinancial markets provide a different example of a setting with these general characteristics .\nIn these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems .\nFor many assets there is no one market ; trade in a single asset may occur simultaneously on the floor of an exchange , on crossing networks , 
on electronic exchanges , and in markets in other countries .\nSome buyers and sellers have access to many or all of these trading venues ; others have access to only one or a few of them .\nThe price at which the asset trades may differ across these trading venues .\nIn fact , there is no `` price '' as different traders pay or receive different prices .\nIn many settings there is also a gap between the price a buyer pays for an asset , the ask price , and the price a seller receives for the asset , the bid price .\nOne of the most striking examples of this phenomenon occurs in the market for foreign exchange , where there is an interbank market with restricted access and a retail market with much more open access .\nSpreads , defined as the difference between bid and ask prices , differ significantly across these markets , even though the same asset is being traded in the two markets .\nIn this paper , we develop a framework in which such phenomena emerge from a game-theoretic model of trade , with buyers , sellers , and traders interacting on a network .\nThe edges of the network connect traders to buyers and sellers , and thus represent the access that different market participants have to one another .\nThe traders serve as intermediaries in a two-stage trading game : they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to ; the sellers and buyers then react to the prices they face .\nThus , the network encodes the relative power in the structural positions of the market participants , including the implicit levels of competition among traders .\nWe show that this game always has a\nsubgame perfect Nash equilibrium , and that all equilibria lead to an efficient ( i.e. socially optimal ) allocation of goods .\nWe also analyze how trader profits depend on the network structure , essentially characterizing in graph-theoretic terms how a trader 's payoff is determined by the amount of competition it experiences with other traders .\nOur work here is connected to several lines of research in economics , finance , and algorithmic game theory , and we discuss these connections in more detail later in the introduction .\nAt a general level , our approach can be viewed as synthesizing two important strands of work : one that treats buyer-seller interaction using network structures , but without attempting to model the processses by which prices are actually formed [ 1 , 4 , 5 , 6 , 8 , 9 , 10 , 13 ] ; and another strand in the literature on market microstructure that incorporates price-setting intermediaries , but without network-type constraints on who can trade with whom [ 12 ] .\nBy developing a network model that explicitly includes traders as price-setting agents , in a system together with buyers and sellers , we are able to capture price formation in a network setting as a strategic process carried out by intermediaries , rather than as the result of a centrally controlled or exogenous mechanism .\nThe Basic Model : Indistinguishable Goods .\nOur goal in formulating the model is to express the process of price-setting in markets such as those discussed above , where the participants do not all have uniform access to one another .\nWe are given a set B of buyers , a set S of sellers , and a set T of traders .\nThere is an undirected graph G that indicates who is able to trade with whom .\nAll edges have one end in B U S and the other in T ; that is , each edge has the form ( i , t ) for i E S and t E T , or ( j , t ) for j E B and t E T 
.\nThis reflects the constraints that all buyer-seller transactions go through traders as intermediaries .\nIn the most basic version of the model , we consider identical goods , one copy of which is initially held by each seller .\nBuyers and sellers each have a value for one copy of the good , and we assume that these values are common knowledge .\nWe will subsequently generalize this to a setting in which goods are distinguishable , buyers can value different goods differently , and potentially sellers can value transactions with different buyers differently as well .\nHaving different buyer valuations captures settings like house purchases ; adding different seller valuations as well captures matching markets -- for example , sellers as job applicants and buyers as employers , with both caring about who ends up with which `` good '' ( and with traders acting as services that broker the job search ) .\nThus , to start with the basic model , there is a single type of good ; the good comes in individisible units ; and each seller initially holds one unit of the good .\nAll three types of agents value money at the same rate ; and each i E B U S additionally values one copy of the good at \u03b8i units of money .\nNo agent wants more than one copy of the good , so additional copies are valued at 0 .\nEach agent has an initial endowment of money that is larger than any individual valuation \u03b8i ; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position , not a lack of funds .\nWe picture each good that is sold flowing along a sequence of two edges : from a seller to a trader , and then from the trader to a buyer .\nThe particular way in which goods flow is determined by the following game .\nFirst , each trader offers a bid price to each seller it is connected to , and an ask price to each buyer it is connected to .\nSellers and buyers then choose from among the offers presented to them by traders .\nIf multiple traders propose the same price to a seller or buyer , then there is no strict best response for the seller or buyer .\nIn this case a selection must be made , and , as is standard ( see for example [ 10 ] ) , we ( the modelers ) choose among the best offers .\nFinally , each trader buys a copy of the good from each seller that accepts its offer , and it sells a copy of the good to each buyer that accepts its offer .\nIf a particular trader t finds that more buyers than sellers accept its offers , then it has committed to provide more copies of the good than it has received , and we will say that this results in a large penalty to the trader for defaulting ; the effect of this is that in equilibrium , no trader will choose bid and ask prices that result in a default .\nMore precisely , a strategy for each trader t is a specification of a bid price 3ti for each seller i to which t is connected , and an ask price \u03b1tj for each buyer j to which t is connected .\n( We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers . )\nEach seller or buyer then chooses at most one incident edge , indicating the trader with whom they will transact , at the indicated price .\n( The choice of a single edge reflects the facts that ( a ) sellers each initially have only one copy of the good , and ( b ) buyers each only want one copy of the good . 
)\nThe payoffs are as follows : For each seller i , the payoff from selecting trader t is 3ti , while the payoff from selecting no trader is \u03b8i .\n( In the former case , the seller receives 3ti units of money , while in the latter it keeps its copy of the good , which it values at \u03b8i . )\nFor each buyer j , the payoff from selecting trader t is \u03b8j -- \u03b1tj , whle the payoff from selecting no trader is 0 .\n( In the former case , the buyer receives the good but gives up \u03b1tj units of money . )\nFor each trader t , with accepted offers from sellers i1 , ... , is and buyers j1 , ... , jb , the payoff is Pr \u03b1tjr -- Pr 3tir , minus a penalty \u03c0 if b > s .\nThe penalty is chosen to be large enough that a trader will never incur it in equilibrium , and hence we will generally not be concerned with the penalty .\nThis defines the basic elements of the game .\nThe equilibrium concept we use is subgame perfect Nash equilibrium .\nSome Examples .\nTo help with thinking about the model , we now describe three illustrative examples , depicted in Figure 1 .\nTo keep the figures from getting too cluttered , we adopt the following conventions : sellers are drawn as circles in the leftmost column and will be named i1 , i2 , ... from top to bottom ; traders are drawn as squares in the middle column and will be named t1 , t2 , ... from top to bottom ; and buyers are drawn as circles in the rightmost column and will be named j1 , j2 , ... from top to bottom .\nAll sellers in the examples will have valuations for the good equal to 0 ; the valuation of each buyer is drawn inside its circle ; and the bid or ask price on each edge is drawn on top of the edge .\nIn Figure 1 ( a ) , we show how a standard second-price auction arises naturally from our model .\nSuppose the buyer valuations from top to bottom are w > x > y > z .\nThe bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1 , and no other buyer accepts the offer of its adjacent trader : thus , trader t1 receives the good with a bid price of x , and makes w -- x by selling the good to buyer j1 for w .\nIn this way , we can consider this particular instance as an auction for a single good in which the traders act as `` proxies '' for their adjacent buyers .\nThe buyer with the highest valuation for the good ends up with it , and the surplus is divided between the seller and the associated trader .\nNote that one can construct a k-unit auction with f > k buyers just as easily , by building a complete bipartite graph on k sellers and f traders , and then attaching each trader to a single distinct buyer .\nIn Figure 1 ( b ) , we show how nodes with different positions in the network topology can achieve different payoffs , even when all\nFigure 1 : ( a ) An auction , mediated by traders , in which the buyer with the highest valuation for the good ends up with it .\n( b )\nA network in which the middle seller and buyer benefit from perfect competition between the traders , while the other sellers and buyers have no power due to their position in the network .\n( c ) A form of implicit perfect competition : all bid/ask spreads will be zero in equilibrium , even though no trader directly `` competes '' with any other trader for the same buyer-seller pair .\nbuyer valuations are the same numerically .\nSpecifically , seller i2 and buyer j2 occupy powerful positions , because the two traders are competing for their business ; on the other hand , the other sellers and buyers are 
in weak positions , because they each have only one option .\nAnd indeed , in every equilibrium , there is a real number x E [ 0 , 1 ] such that both traders offer bid and ask prices of x to i2 and j2 respectively , while they offer bids of 0 and asks of 1 to the other sellers and buyers .\nThus , this example illustrates a few crucial ingredients that we will identify at a more general level shortly .\nSpecifically , i2 and j2 experience the benefits of perfect competition , in that the two traders drive the bid-ask spreads to 0 in competing for their business .\nOn the other hand , the other sellers and buyers experience the downsides of monopoly -- they receive 0 payoff since they have only a single option for trade , and the corresponding trader makes all the profit .\nNote further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents -- capturing the fact that there is no one fixed `` price '' in the kinds of markets that motivate the model , but rather different prices reflecting the relative power of the different agents involved .\nThe previous example shows perhaps the most natural way in which a trader 's profit on a particular transaction can drop to 0 : when there is another trader who can replicate its function precisely .\n( In that example , two traders each had the ability to move a copy of the good from i2 to j2 . )\nBut as our subsequent results will show , traders make zero profit more generally due to global , graph-theoretic reasons .\nThe example in Figure 1 ( c ) gives an initial indication of this : one can show that for every equilibrium , there is a y E [ 0 , 1 ] such that every bid and every ask price is equal to y .\nIn other words , all traders make zero profit , whether or not a copy of the good passes through them -- and yet , no two traders have any seller-buyer paths in common .\nThe price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents ; this is an example of implicit perfect competition determined by the network topology .\nExtending the Model to Distinguishable Goods .\nWe extend the basic model to a setting with distinguishable goods , as follows .\nInstead of having each agent i E B U S have a single numerical valuation \u03b8i , we index valuations by pairs of buyers and sellers : if buyer j obtains the good initially held by seller i , it gets a utility of \u03b8ji , and if seller i sells its good to buyer j , it experiences a loss of utility of \u03b8ij .\nThis generalizes the case of indistinguishable goods , since we can always have these pairwise valuations depend only on one of the indices .\nA strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer , and offering an ask to each buyer that specifies both a price and a seller .\n( We can also handle a model in which a trader offers bids ( respectively , asks ) in the form of vectors , essentially specifying a `` menu '' with a price attached to each buyer ( resp .\nseller ) . 
)\nEach buyer and seller selects an offer from an adjacent trader , and the payoffs to all agents are determined as before .\nThis general framework captures matching markets [ 10 , 13 ] : for example , a job market that is mediated by agents or employment search services ( as in hiring for corporate executives , or sports or entertainment figures ) .\nHere the sellers are job applicants , buyers are employers , and traders are the agents that mediate the job market .\nOf course , if one specifies pairwise valuations on buyers but just single valuations for sellers , we model a setting where buyers can distinguish among the goods , but sellers do n't care whom they sell to -- this ( roughly ) captures settings like housing markets .\nOur Results .\nOur results will identify general forms of some of the principles noted in the examples discussed above -- including the question of which buyers end up with the good ; the question of how payoffs are differently realized by sellers , traders , and buyers ; and the question of what structural properties of the network determine whether the traders will make positive profits .\nTo make these precise , we introduce the following notation .\nAny outcome of the game determines a final allocation of goods to some of the agents ; this can be specified by a collection M of triples ( ie , te , je ) , where ie E S , te E T , and je E B ; moreover , each seller and each buyer appears in at most one triple .\nThe meaning is for each e E M , the good initially held by ie moves to je through te .\n( Sellers appearing in no triple keep their copy of the good . )\nWe say that the value of the allocation is equal to Pe \u2208 M \u03b8jeie -- \u03b8ieje .\nLet \u03b8 \u2217 denote the maximum value of any allocation M that is feasible given the network .\nWe show that every instance of our game has an equilibrium , and that in every such equilibrium , the allocation has value \u03b8 \u2217 --\nin other words , it achieves the best value possible .\nThus , equilibria in this model are always efficient , in that the market enables the `` right '' set of people to get the good , subject to the network constraints .\nWe establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network ; the dual of this linear program contains enough information to extract equilibrium prices .\nBy the definition of the game , the value of the equilibrium allocation is divided up as payoffs to the agents , and it is interesting to ask how this value is distributed -- in particular how much profit a trader is able to make based on its position in the network .\nWe find that , although all equilibria have the same value , a given trader 's payoff can vary across different equilibria .\nHowever , we are able to characterize the maximum and minimum amounts that a given trader is able to make , where these maxima and minima are taken over all equilibria , and we give an efficient algorithm to compute this .\nIn particular , our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff : this occurs if and only there is some edge e incident to t that is essential , in the sense that deleting e reduces the value of the optimal allocation \u03b8 \u2217 .\nWe also obtain results for the sum of all trader profits .\nRelated Work .\nThe standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model in which anonymous buyers and sellers 
trade a good at a single market-clearing price.\nThis reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights.\nBut it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other.\nThe difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other.\nIn fact there is no market, in the everyday sense of that word, in the Walrasian model.\nThat is, there is no physical or virtual place where buyers and sellers interact to trade and set prices.\nThus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.\nThere are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices.\nThe literature on imperfect competition is perhaps the oldest of these.\nHere a monopolist, or a group of oligopolists, chooses prices in order to maximize profits (see [14] for the standard textbook treatment of these markets).\nA monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates.\nOligopolists play a game in which their payoffs depend on market demand and the actions of their competitors.\nIn this literature there are agents who set prices, but the fiction of a single market is maintained.\nIn the equilibrium search literature, firms set prices and consumers search over them (see [3]).\nConsumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries.\nIn the general equilibrium literature there have been various attempts to introduce price determination.\nA standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand.\nThe Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market.\nMore sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them.\nBut again there are no price-setting agents here.\nIn the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory).\nWork in information economics has identified similar phenomena (see, e.g., [7]).\nBut there is little research in these literatures examining the effect of restrictions on who can trade with whom.\nThere have been several approaches to studying how network structure determines prices.\nThese have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms.\nIn briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system.\nIn recent work, Kakade et al.
[8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11].\nEven-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium.\nLeonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another.\nLeonard studies VCG prices in this setting; Babaioff et al. and Chu and Shen additionally provide a budget-balanced mechanism.\nSince the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition.\nIn contrast, our model has known valuations and prices arising from the strategic behavior of traders.\nThus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism.\nRather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient, something not possible in the mechanism design frameworks that have been used.\nDemange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design.\nKranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices.\nTheir auction has desirable equilibrium properties, but, as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction.\nIn fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.1\n1 Kranton and Minehart, however, can also analyze a more general setting in which buyers' values are private and thus buyers and sellers play a game of incomplete information.\nWe deal only with complete information.\nFinally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core.\nThey study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is different as they do not consider agents that seek to set prices.\n2.\nMARKETS WITH PAIR-TRADERS\nFor understanding the ideas behind the analysis of the general model, it is very useful to first consider a special case with a restricted form of traders that we refer to as pair-traders.\nIn this case, each trader is connected to just one buyer and one seller.\n(Thus, it essentially serves as a ``trade route'' between the two.
)\nThe techniques we develop to handle this case will form a useful basis for reasoning about the case of traders that may be connected arbitrarily to the sellers and buyers .\nWe will relate profits in a subgame perfect Nash equilibrium to optimal solutions of a certain linear program , use this relation to show that all equilibria result in efficient allocation of the goods , and show that a pure equilibrium always exists .\nFirst , we consider the simplest model where sellers have indistinguishable items , and each buyer is interested in getting one item .\nThen we extend the results to the more general case of a matching market , as discussed in the previous section , where valuations depend on the identity of the seller and buyer .\nWe then characterize the minimum and maximum profits traders can make .\nIn the next section , we extend the results to traders that may be connected to any subset of sellers and buyers .\nGiven that we are working with pair-traders in this section , we can represent the problem using a bipartite graph G whose node set is B U S , and where each trader t , connecting seller i and buyer j , appears as an edge t = ( i , j ) in G. Note , however , that we allow multiple traders to connect the same pair of agents .\nFor each buyer and seller i , we will use adj ( i ) to denote the set of traders who can trade with i.\n2.1 Indistinguishable Goods\nThe socially optimal trade for the case of indistinguishable goods is the solution of the transportation problem : sending goods along the edges representing the traders .\nThe edges along which trade occurs correspond to a matching in this bipartite graph , and the optimal trade is described by the following linear program .\nProof .\nClearly all profits are nonnegative , as trading is optional for all agents .\nTo see why the last set of inequalities holds , consider two cases separately .\nFor a trader t who conducted trade , we get equality by definition .\nFor other traders t = ( i , j ) , the value pi + \u03b8i is the price that seller i sold for ( or \u03b8i if seller i decided to keep the good ) .\nOffering a bid \u03b2t > pi + \u03b8i would get the seller to sell to trader t. 
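For reference, the following is a reconstruction of the transportation program and its dual (with the redundant capacity constraints xt ≤ 1 included), written so as to be consistent with the constraints used in the surrounding proofs; the dual variables pi (one per agent) and yt (one per trader) are exactly the profit variables discussed here.

\begin{align*}
\text{(Primal)}\qquad \max\ & \sum_{t=(i,j)\in T} x_t\,(\theta_j-\theta_i)
  \quad\text{s.t.}\quad \sum_{t\in\mathrm{adj}(i)} x_t \le 1 \ \ \forall i\in B\cup S,\qquad 0\le x_t\le 1\ \ \forall t\in T,\\
\text{(Dual)}\qquad \min\ & \sum_{i\in B\cup S} p_i+\sum_{t\in T} y_t
  \quad\text{s.t.}\quad p_i+p_j+y_t \ge \theta_j-\theta_i\ \ \forall t=(i,j)\in T,\qquad p_i\ge 0,\ y_t\ge 0.
\end{align*}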
Similarly, θj − pj is the price that buyer j bought for (or θj if he didn't buy), and for any ask αt < θj − pj, the buyer will buy from trader t.\nSo unless θj − pj ≤ θi + pi, the trader has a profitable deviation.\nNow we are ready to prove our first theorem:\nTHEOREM 2.2.\nIn any equilibrium the trade is efficient.\nProof.\nLet x be a flow of goods resulting in an equilibrium, and let the variables p and y be the profits.\nConsider the linear program describing the socially optimal trade.\nWe will also add a set of additional constraints xt ≤ 1 for all traders t ∈ T; this can be added to the description, as it is implied by the other constraints.\nNow we claim that the two linear programs are duals of each other.\nThe variables pi for agents i ∈ B ∪ S correspond to the inequalities Σ_{t ∈ adj(i)} xt ≤ 1.\nThe additional dual variable yt corresponds to the additional inequality xt ≤ 1.\nThe optimality of the social value of the trade will follow from the claim that the solutions of these two linear programs derived from an equilibrium satisfy the complementary slackness conditions for this pair of linear programs, and hence both x and (p, y) are optimal solutions to the corresponding linear programs.\nThere are three different complementary slackness conditions we need to consider, corresponding to the three sets of variables x, y, and p. Any agent can only make profit if he transacts, so pi > 0 implies Σ_{t ∈ adj(i)} xt = 1, and similarly, yt > 0 implies that xt = 1.\nFinally, consider a trader t with xt > 0 that trades between seller i and buyer j, and recall that we have seen above that the inequality yt ≥ (θj − pj) − (θi + pi) is satisfied with equality for traders who trade.\nNext we consider an equilibrium.\nEach trader t = (i, j) must offer a bid βt and an ask αt.\n(We omit the subscript denoting the seller and buyer here since we are dealing with pair-traders.)\nGiven the bid and ask prices, the agents react to these prices, as described earlier.\nInstead of focusing on prices, we will focus on profits.\nIf a seller i sells to a trader t ∈ adj(i) with bid βt, then his profit is pi = βt − θi.\nSimilarly, if a buyer j buys from a trader t ∈ adj(j) with ask αt, then his profit is pj = θj − αt.\nFinally, if a trader t trades with ask αt and bid βt, then his profit is yt = αt − βt.\nAll agents not involved in trade make 0 profit.\nWe will show that the profits at equilibrium are an optimal solution to the following linear program.\nProof.\nConsider an efficient trade; let xt = 1 if t trades and 0 otherwise; and consider an optimal solution (p, y) to the dual linear program.\nWe would like to claim that all dual solutions correspond to equilibrium prices, but unfortunately this is not exactly true.\nBefore we can convert a dual solution to equilibrium prices, we may need to modify the solution slightly as follows.\nConsider any agent i that is only connected to a single trader t.
Because the agent is only connected to a single trader , the variables yt and pi are dual variables corresponding to the same primal inequality xt < 1 , and they always appear together as yt + pi in all inequalities , and also in the objective function .\nThus there is an optimal solution in which pi = 0 for all agents i connected only to a single trader .\nAssume ( p , y ) is a dual solution where agents connected only to one trader have pi = 0 .\nFor a seller i , let \u03b2t = \u03b8i + pi be the bid for all traders t adjacent to i. Similarly , for each buyer j , let \u03b1t = \u03b8j -- pj be the ask for all traders t adjacent to j .\nWe claim that this set of bids and asks , together with the trade x , are an equilibrium .\nTo see why , note that all traders t adjacent to a seller or buyer i offer the same ask or bid , and so trading with any trader is equally good for agent i. Also , if i is not trading in the solution\nx then by complementary slackness pi = 0 , and hence not trading is also equally good for i .\nThis shows that sellers and buyers do n't have an incentive to deviate .\nWe need to show that traders have no incentive to deviate either .\nWhen a trader t is trading with seller i and buyer j , then profitable deviations would involve increasing \u03b1t or decreasing \u03b2t .\nBut by our construction ( and assumption about monopolized agents ) all sellers and buyers have multiple identical ask/bid offers , or trade is occurring at valuation .\nIn either case such a deviation can not be successful .\nFinally , consider a trader t = ( i , j ) who does n't trade .\nA deviation for t would involve offering a lower ask to seller i and a higher bid to seller j than their current trade .\nHowever , yt = 0 by complementary slackness , and hence pi + \u03b8i > \u03b8j -- pj , so i sells for a price at least as high as the price at which j buys , so trader t can not create profitable trade .\nNote that a seller or buyer i connected to a single trader t can not have profit at equilibrium , so possible equilibrium profits are in one-to-one correspondence with dual solutions for which pi = 0 whenever i is monopolized by one trader .\nA disappointing feature of the equilibrium created by this proof is that some agents t may have to create ask-bid pairs where \u03b2t > \u03b1t , offering to buy for more than the price at which they are willing to sell .\nAgents that make such crossing bid-ask pairs never actually perform a trade , so it does not result in negative profit for the agent , but such pairs are unnatural .\nCrossing bid-ask pairs are weakly dominated by the strategy of offering a low bid \u03b2 = 0 and an extremely high ask to guarantee that neither is accepted .\nTo formulate a way of avoiding such crossing pairs , we say an equilibrium is cross-free if \u03b1t > \u03b2t for all traders t .\nWe now show there is always a cross-free equilibrium .\nProof .\nConsider an optimal solution to the dual linear program .\nTo get an equilibrium without crossing bids , we need to do a more general modification than just assuming that pi = 0 for all sellers and buyers connected to only a single trader .\nLet the set E be the set of edges t = ( i , j ) that are tight , in the sense that we have the equality yt = ( \u03b8j -- pj ) -- ( \u03b8i + pi ) .\nThis set E contain all the edges where trade occurs , and some more edges .\nWe want to make sure that pi = 0 for all sellers and buyers that have degree at most 1 in E. 
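To make the price-setting step concrete, here is a minimal sketch (hypothetical helper name and instance values) of the basic conversion used above: given an optimal dual solution in which agents connected to only a single trader have pi = 0, every trader adjacent to an agent receives the same offer, bid βt = θi + pi toward the seller and ask αt = θj − pj toward the buyer; crossing pairs may occur for traders who do not trade.

# Sketch of the dual-to-prices conversion; the instance values below are hypothetical.
def offers_from_dual(traders, theta_seller, theta_buyer, p):
    """traders: {name: (seller, buyer)} -> {name: (bid, ask)}; crossing pairs may occur."""
    offers = {}
    for t, (i, j) in traders.items():
        bid = theta_seller[i] + p[i]   # lowest bid seller i is willing to accept
        ask = theta_buyer[j] - p[j]    # highest ask buyer j is willing to accept
        offers[t] = (bid, ask)         # ask - bid equals yt on trading (tight) edges
    return offers

# Two sellers (theta = 0) competing through pair-traders for one buyer (theta = 1):
# the optimal dual gives the buyer profit 1 and everyone else 0, so both traders
# end up offering bid 0 / ask 0 and make no profit.
print(offers_from_dual({"t1": ("s1", "b"), "t2": ("s2", "b")},
                       {"s1": 0.0, "s2": 0.0}, {"b": 1.0},
                       {"s1": 0.0, "s2": 0.0, "b": 1.0}))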
Consider a seller i that has pi > 0 .\nWe must have i involved in a trade , and the edge t = ( i , j ) along which the trade occurs must be tight .\nSuppose this is the only tight edge adjacent to agent i ; then we can decrease pi and increase yt till one of the following happens : either pi = 0 or the constraint of some other agent t ' E adj ( i ) becomes tight .\nThis change only increases the set of tight edges E , keeps the solution feasible , and does not change the objective function value .\nSo after doing this for all sellers , and analogously changing yt and pj for all buyers , we get an optimal solution where all sellers and buyers i either have pi = 0 or have at least two adjacent tight edges .\nNow we can set asks and bids to form a cross-free equilibrium .\nFor all traders t = ( i , j ) associated with an edge t E E we set \u03b1t and \u03b2t as before : we set the bid \u03b2t = pi + \u03b8i and the ask \u03b1t = \u03b8j -- pj .\nFor a trader t = ( i , j ) E ~ E we have that pi + \u03b8i > \u03b8j -- pj and we set \u03b1t = \u03b2t to be any value in the range [ \u03b8j -- pj , pi + \u03b8i ] .\nThis guarantees that for each seller or buyer the best sell or buy offer is along the edge where trade occurs in the solution .\nThe askbid values along the tight edges guarantee that traders who trade can not increase their spread .\nTraders t = ( i , j ) who do not trade can not make profit due to the constraint pi + \u03b8i > \u03b8j -- pj\nFigure 2 : Left : an equilibrium with crossing bids where traders make no money .\nRight : an equilibrium without crossing bids for any value x E [ 0 , 1 ] .\nTotal trader profit ranges between 1 and 2 .\n2.2 Distinguishable Goods\nWe now consider the case of distinguishable goods .\nAs in the previous section , we can write a transshipment linear program for the socially optimal trade , with the only change being in the objective function .\nWe can show that the dual of this linear program corresponds to trader profits .\nRecall that we needed to add the constraints xt < 1 for all traders .\nThe dual is then :\nIt is not hard to extend the proofs of Theorems 2.2 -- 2.4 to this case .\nProfits in an equilibrium satisfy the dual constraints , and profits and trade satisfy complementary slackness .\nThis shows that trade is socially optimal .\nTaking an optimal dual solution where pi = 0 for all agents that are monopolized , we can convert it to an equilibrium , and with a bit more care , we can also create an equilibrium with no crossing bid-ask pairs .\nTHEOREM 2.5 .\nAll equilibria for the case of pair-traders with distinguishable goods result in socially optimal trade .\nPure noncrossing equilibria exist .\n2.3 Trader Profits\nWe have seen that all equilibria are efficient .\nHowever , it turns out that equilibria may differ in how the value of the allocation is spread between the sellers , buyers and traders .\nFigure 2 depicts a simple example of this phenomenon .\nOur goal is to understand how a trader 's profit is affected by its position in the network ; we will use the characterization we obtained to work out the range of profits a trader can make .\nTo maximize the profit of a trader t ( or a subset of traders T ' ) all we need to do is to find an optimal solution to the dual linear program maximizing the value of yt ( or the sum P tET , yt ) .\nSuch dual solutions will then correspond to equilibria with non-crossing prices .\nTHEOREM 2.6 .\nFor any trader t or subset of traders T ' the maximum total profit they can make in any 
equilibrium can be computed in polynomial time.\nThis maximum profit can be obtained by a non-crossing equilibrium.\nOne way to think about the profit of a trader t = (i, j) is as a subtraction from the value of the corresponding edge (i, j).\nThe value of the edge is the social value θji − θij if the trader makes no profit, and decreases to θji − θij − yt if the trader t insists on making yt profit.\nTrader t gets yt profit in equilibrium if, after this decrease in the value of the edge, the edge is still included in the optimal transshipment.\nTHEOREM 2.7.\nA trader t can make profit in an equilibrium if and only if t is essential for the social welfare, that is, if deleting agent t decreases social welfare.\nThe maximum profit he can make is exactly his value to society, that is, the increase his presence causes in the social welfare.\nIf we allow crossing equilibria, then we can also find the minimum possible profit.\nRecall that in the proof of Theorem 2.3, traders only made money off of sellers or buyers that they have a monopoly over.\nAllowing such equilibria with crossing bids, we can find the minimum profit a trader or set of traders can make by minimizing the value yt (or the sum Σ_{t ∈ T'} yt) over all optimal solutions that satisfy pi = 0 whenever i is connected to only a single trader.\n3.\nGENERAL TRADERS\nNext we extend the results to a model where traders may be connected to an arbitrary number of sellers and buyers.\nFor a trader t ∈ T we will use S(t) and B(t) to denote the sets of sellers and buyers, respectively, connected to trader t.\nIn this section we focus on the general case when goods are distinguishable (i.e., both buyers and sellers have valuations that are sensitive to the identity of the agent they are paired with in the allocation).\nIn the full version of the paper we also discuss the special case of indistinguishable goods in more detail.\nTo get the optimal trade, we consider the bipartite graph G = (S ∪ B, E) connecting sellers and buyers, where an edge e = (i, j) connects a seller i and a buyer j if there is a trader adjacent to both: E = {(i, j) : adj(i) ∩ adj(j) ≠ ∅}.\nOn this graph, we then solve the instance of the assignment problem that was also used in Section 2.2, with the value of edge (i, j) equal to θji − θij (since the value of trading between i and j is independent of which trader conducted the trade).\nWe will also use the dual of this linear program: minimize val(z) = Σ_i zi subject to zi ≥ 0 for all agents i, and zi + zj ≥ θji − θij for all edges (i, j) ∈ E.\n3.1 Bids and Asks and Trader Optimization\nFirst we need to understand what bidding model we will use.\nEven when goods are indistinguishable, a trader may want to price-discriminate, and offer different bid and ask values to different sellers and buyers.\nIn the case of distinguishable goods, we have to deal with a further complication: the trader has to name the good she is proposing to sell or buy, and can possibly offer multiple different products.\nThere are two variants of our model, depending on whether a trader makes a single bid or ask to a seller or buyer, or she offers a menu of options.\n(i) A trader t can offer a buyer j a menu of asks αtji, a vector of values for all the products that she is connected to, where αtji is the ask for the product of seller i. Symmetrically, a trader t can offer to each seller i a menu of bids βtij for selling to different buyers j.
(ii) Alternatively, we can require that each trader t makes at most one bid to each seller and at most one ask to each buyer, where an ask has to name the product being sold, and a bid has to name the particular buyer the good will be sold to.\nOur results hold in either model.\nFor notational simplicity we will use the menu option here.\nNext we need to understand the optimization problem of a trader t. Suppose we have bid and ask values for all other traders t' ∈ T, t' ≠ t.\nWhat are the best bid and ask offers trader t can make as a best response to the current set of bids and asks?\nFor each seller i let pi be the maximum profit seller i can make using bids by other traders, and symmetrically let pj be the maximum profit buyer j can make using asks by other traders (let pi = 0 for any seller or buyer i who cannot make a profit).\nNow consider a seller-buyer pair (i, j) that trader t can connect.\nTrader t will have to make a bid of at least βtij = θij + pi to seller i and an ask of at most αtji = θji − pj to buyer j to get this trade, so the maximum profit she can make on this trade is vtij = αtji − βtij = θji − pj − (θij + pi).\nThe optimal trade for trader t is obtained by solving a matching problem to find the matching between the sellers S(t) and buyers B(t) that maximizes the total value vtij for trader t.\nWe will need the dual of the linear program of finding the trade of maximum profit for the trader t.\nWe will use qti as the dual variable associated with the constraint of seller or buyer i.\nThe dual is then the following problem.\nWe view qti as the profit made by t from trading with seller or buyer i. Theorem 3.1 summarizes the above discussion.\n3.2 Efficient Trade and Equilibrium\nNow we can prove that trade at equilibrium is always efficient.\nTHEOREM 3.2.\nEvery equilibrium results in an efficient allocation of the goods.\nProof.\nConsider an equilibrium, with xe = 1 if and only if trade occurs along edge e = (i, j).\nTrade is a solution to the transshipment linear program used in Section 2.2.\nLet pi denote the profit of seller or buyer i. Each trader t currently has the best solution to his own optimization problem.\nA trader t finds his optimal trade (given bids and asks by all other traders) by solving a matching problem.\nLet qti for i ∈ B(t) ∪ S(t) denote the optimal dual solution to this matching problem as described by Theorem 3.1.\nWhen setting up the optimization problem for a trader t above, we used pi to denote the maximum profit i can make without the offer of trader t. Note that this pi is exactly the same pi we use here, the profit of agent i.\nThis is clearly true for all traders t' that are not trading with i in the equilibrium.\nTo see why it is true for the trader t that i is trading with, we use the fact that the current set of bid-ask values is an equilibrium.\nIf for any agent i the bid or ask of trader t were the unique best option, then t could extract more profit by offering a slightly larger ask or a slightly smaller bid, a contradiction.\nWe show the trade x is optimal by considering the dual solution zi = pi + Σ_{t ∈ adj(i)} qti for all agents i ∈ B ∪ S.\nWe claim z is a dual solution, and it satisfies complementary slackness with the trade x. To see this we need to show a few facts.\nWe need that zi > 0 implies that i trades.\nIf zi > 0 then either pi > 0 or qti > 0 for some trader t.
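The single-trader best response described in Section 3.1 can be sketched as follows; the instance values are hypothetical, and SciPy's assignment solver stands in for the matching step (profits are clamped at zero so that pairs the trader would not serve are simply dropped).

import numpy as np
from scipy.optimize import linear_sum_assignment

sellers = ["s1", "s2"]                  # S(t)
buyers = ["b1", "b2"]                   # B(t)
theta_ij = {("s1", "b1"): 0.0, ("s1", "b2"): 0.0,   # seller i's value, keyed by (i, j)
            ("s2", "b1"): 0.0, ("s2", "b2"): 0.0}
theta_ji = {("s1", "b1"): 3.0, ("s1", "b2"): 1.0,   # buyer j's value for i's good, keyed by (i, j)
            ("s2", "b1"): 2.0, ("s2", "b2"): 2.0}
p = {"s1": 0.0, "s2": 1.0, "b1": 0.5, "b2": 0.0}    # best profits available from other traders

# vtij = theta_ji - pj - (theta_ij + pi): trader t's profit margin on the pair (i, j).
v = np.array([[theta_ji[i, j] - p[j] - (theta_ij[i, j] + p[i])
               for j in buyers] for i in sellers])
rows, cols = linear_sum_assignment(np.maximum(v, 0.0), maximize=True)
trades = [(sellers[r], buyers[c], v[r, c]) for r, c in zip(rows, cols) if v[r, c] > 0]
print("best-response trades (seller, buyer, trader profit):", trades)
print("total trader profit:", sum(x[2] for x in trades))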
Agent i can only make profit pi > 0 if he is involved in a trade.\nIf qti > 0 for some t, then trader t must trade with i: his current trade is optimal for him, and complementary slackness for the dual of his matching problem shows that qti > 0 implies that t trades with i.\nFor an edge (i, j) associated with a trader t we need to show that the dual solution is feasible, that is, zi + zj ≥ θji − θij.\nRecall that vtij = θji − pj − (θij + pi), and the dual constraint of the trader's optimization problem requires qti + qtj ≥ vtij.\nPutting these together, we have zi + zj ≥ pi + qti + pj + qtj ≥ vtij + pi + pj = θji − θij.\nFinally, we need to show that the trade variables x also satisfy the complementary slackness constraint: when xe > 0 for an edge e = (i, j), the corresponding dual constraint is tight.\nLet t be the trader involved in the trade.\nBy complementary slackness of t's optimization problem we have qti + qtj = vtij.\nTo see that z satisfies complementary slackness we need to argue that for all other traders t' ≠ t we have both qt'i = 0 and qt'j = 0.\nThis is true since qt'i > 0 would imply, by complementary slackness of the optimization problem of t', that t' must trade with i at his optimum, whereas it is t ≠ t' that is trading with i.\nNext we want to show that a non-crossing equilibrium always exists.\nWe call an equilibrium non-crossing if the bid-ask offers a trader t makes for a seller-buyer pair (i, j) never cross, that is, βtij ≤ αtji for all t, i, j.\nTHEOREM 3.3.\nThere exists a non-crossing equilibrium supporting any socially optimal trade.\nProof.\nConsider an optimal trade x and a dual solution z as before.\nTo find a non-crossing equilibrium we need to divide the profit zi between i and the trader t trading with i.\nWe will use qti as the trader t's profit associated with agent i, for any i ∈ S(t) ∪ B(t).\nWe will need to guarantee the following properties: Trader t trades with agent i whenever qti > 0.\nThis is one of the complementary slackness conditions needed to make sure the current trade is optimal for trader t.
For all seller-buyer pairs (i, j) that a trader t can trade with, we have\npi + qti + pj + qtj ≥ θji − θij, (1)\nwhich will make sure that qt is a feasible dual solution for the optimization problem faced by trader t.\nWe need to have equality in (1) when trader t is trading between i and j.\nThis is one of the complementary slackness conditions for trader t, and will ensure that the trade of t is optimal for the trader.\nFinally, we want to arrange that each agent i with pi > 0 has multiple offers for making profit pi, and the trade occurs at one of his best offers.\nTo guarantee this in the corresponding bids and asks, we need to make sure that whenever pi > 0 there are multiple t ∈ adj(i) that have equality in the above constraint (1).\nWe start by setting pi = zi for all i ∈ S ∪ B and qti = 0 for all i ∈ S ∪ B and traders t ∈ adj(i).\nThis guarantees all invariants except the last property about multiple t ∈ adj(i) having equality in (1).\nWe will modify p and q to gradually enforce the last condition, while maintaining the others.\nConsider a seller with pi > 0.\nBy optimality of the trade and the dual solution z, seller i must trade with some trader t, and that trader will have equality in (1) for the buyer j that he matches with i.\nIf this is the only trader that has a tight constraint in (1) involving seller i, then we increase qti and decrease pi until either pi = 0 or another trader t' ≠ t achieves equality in (1) for some edge adjacent to i (possibly with a different buyer j').\nThis change maintains all invariants, and increases the set of sellers that also satisfy the last constraint.\nWe can do a similar change for a buyer j that has pj > 0 and has only one trader t with a tight constraint (1) adjacent to j.\nAfter possibly repeating this for all sellers and buyers, we get profits satisfying all constraints.\nNow we get equilibrium bid and ask values as follows.\nFor a trader t that has equality for the seller-buyer pair (i, j) in (1) we offer αtji = θji − pj and βtij = θij + pi.\nFor all other traders t and seller-buyer pairs (i, j) we have the invariant (1), and using this we know we can pick a value y in the range θij + pi + qti ≥ y ≥ θji − (pj + qtj).\nWe offer bid and ask values βtij = αtji = y. Neither the bid nor the ask will be the unique best offer for the buyer, and hence the trade x remains an equilibrium.\n3.3 Trader Profits\nFinally we turn to the goal of understanding, in the case of general traders, how a trader's profit is affected by its position in the network.\nThe profit of trader t in an equilibrium is Σ_i qti.\nFirst, we show how to maximize the total profit of a set of traders.\nTo find the maximum possible profit for a trader t or a set of traders T', we need to do the following: Find profits pi ≥ 0 and qti ≥ 0 so that zi = pi + Σ_{t ∈ adj(i)} qti is an optimal dual solution, and that also satisfy the constraints (1) for any seller i and buyer j connected through a trader t ∈ T.
Now, subject to all these conditions, we maximize the sum Σ_{t ∈ T'} Σ_{i ∈ S(t) ∪ B(t)} qti.\nNote that this maximization is a secondary objective function to the primary objective that z is an optimal dual solution.\nThe proof of Theorem 3.3 then shows how to turn this into an equilibrium.\nTHEOREM 3.4.\nThe maximum value of Σ_{t ∈ T'} Σ_i qti above is the maximum profit the set T' of traders can make.\nProof.\nBy the proof of Theorem 3.2 the profits of trader t can be written in this form, so the set of traders T' cannot make more profit than claimed in this theorem.\nTo see that T' can indeed make this much profit, we use the proof of Theorem 3.3.\nWe modify that proof to start with profit vectors p and qt for t ∈ T', and set qt = 0 for all traders t ∉ T'.\nWe verify that this starting solution satisfies the first three of the four required properties, and then we can follow the proof to make the fourth property true.\nWe omit the details of this in the present version.\nIn Section 2.3 we showed that in the case of pair-traders, a trader t can make money if he is essential for efficient trade.\nThis is not true for the type of more general traders we consider here, as shown by the example in Figure 3.\nFigure 3: The top trader is essential for social welfare.\nYet the only equilibrium is to have bid and ask values equal to 0, and the trader makes no profit.\nHowever, we still get a characterization of when a trader t can make a positive profit.\nTHEOREM 3.5.\nA trader t can make profit in an equilibrium if and only if there is a seller or buyer i adjacent to t such that the connection of trader t to agent i is essential for social welfare, that is, if deleting t from adj(i) decreases the value of the optimal allocation.\nProof.\nFirst we show the direction that if a trader t can make money, there must be an agent i so that t's connection to i is essential to social welfare.\nLet p, q be the profits in an equilibrium where t makes money, as described by Theorem 3.2, with Σ_{i ∈ S(t) ∪ B(t)} qti > 0.\nSo we have some agent i with qti > 0.\nWe claim that the connection between agent i and trader t must be essential; in particular, we claim that social welfare must decrease by at least qti if we delete t from adj(i).\nTo see why, note that decreasing the value of all edges of the form (i, j) associated with trader t by qti keeps the same trade optimal, as we get a matching dual solution by simply resetting qti to zero.\nTo see the opposite direction, assume deleting t from adj(i) decreases social welfare by some value y. Assume i is a seller (the case of buyers is symmetric), and decrease by y the social value of each edge (i, j) for any buyer j such that t is the only trader connecting i and j.
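The criterion in Theorem 3.5 can be checked directly by comparing optimal allocation values with and without a given trader-agent connection; the following is a small sketch under hypothetical valuations, using SciPy's assignment solver for the matching step (edge values are clamped at zero, which stands in for agents keeping their goods).

import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_value(sellers, buyers, S, B, theta_ij, theta_ji):
    """Max value of a feasible allocation: max-weight matching on pairs (i, j)
    that share at least one connecting trader."""
    w = np.zeros((len(sellers), len(buyers)))
    for a, i in enumerate(sellers):
        for b, j in enumerate(buyers):
            if any(i in S[t] and j in B[t] for t in S):
                w[a, b] = max(theta_ji[i, j] - theta_ij[i, j], 0.0)
    r, c = linear_sum_assignment(w, maximize=True)
    return w[r, c].sum()

sellers, buyers = ["s1", "s2"], ["b1", "b2"]
S = {"t1": {"s1", "s2"}, "t2": {"s2"}}          # sellers each trader can reach
B = {"t1": {"b1"}, "t2": {"b1", "b2"}}          # buyers each trader can reach
theta_ij = {(i, j): 0.0 for i in sellers for j in buyers}
theta_ji = {("s1", "b1"): 2.0, ("s1", "b2"): 0.0, ("s2", "b1"): 1.0, ("s2", "b2"): 1.0}

base = optimal_value(sellers, buyers, S, B, theta_ij, theta_ji)
for t in S:
    for i in list(S[t]) + list(B[t]):
        # Delete trader t from adj(i) and recompute the optimal allocation value.
        S2 = {k: (v - {i} if k == t else v) for k, v in S.items()}
        B2 = {k: (v - {i} if k == t else v) for k, v in B.items()}
        drop = base - optimal_value(sellers, buyers, S2, B2, theta_ij, theta_ji)
        if drop > 0:
            print(f"trader {t} is essential for agent {i}: welfare drops by {drop}")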
By assumption the trade is still optimal , and we let z be the dual solution for this matching .\nNow we use the same process as in the proof of Theorem 3.3 to create a non-crossing equilibrium starting with pi = zi for all i \u2208 S \u222a B , and qti = - y , and all other q values 0 .\nThis creates an equilibrium with non-crossing bids where t makes at least - y profit ( due to trade with seller i ) .\nFinally , if we allow crossing equilibria , then we can find the minimum possible profit by simply finding a dual solution minimizing the dual variables associated with agents monopolized by some trader .\nTHEOREM 3.6 .\nFor any trader t or subset of traders T ' , the minimum total profit they can make in any equilibrium can be computed in polynomial time ."} {"id": "C-20", "title": "", "abstract": "", "keyphrases": ["internet-base servic", "data center migrat", "wan", "lan", "virtual server", "storag replic", "synchron replic", "asynchron replic", "network support", "storag", "voic-over-ip", "voip", "databas"], "prmu": [], "lvl-1": "Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach K.K. Ramakrishnan, Prashant Shenoy , Jacobus Van der Merwe AT&T Labs-Research / University of Massachusetts ABSTRACT A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe make use of server virtualization technologies to enable the replication and migration of server functions.\nWe propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms Design, Reliability 1.\nINTRODUCTION A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications.\nA relatively minor outage can disrupt and inconvenience a large number of users.\nToday these services are almost exclusively hosted in data centers.\nRecent advances in server virtualization technologies [8, 14, 22] allow for the live migration of services within a local area network (LAN) environment.\nIn the LAN environment, these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion.\nNot only can it support planned maintenance events [8], but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [22].\nWhen using these technologies in a LAN environment, services execute in a virtual server, and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another, without any significant downtime for the 
service or application.\nIn particular, since the virtual server retains the same network address as before, any ongoing network level interactions are not disrupted.\nSimilarly, in a LAN environment, storage requirements are normally met via either network attached storage (NAS) or via a storage area network (SAN) which is still reachable from the new physical server location to allow for continued storage access.\nUnfortunately in a wide area environment (WAN), live server migration is not as easily achievable for two reasons: First, live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original.\nWhile this is fairly easily achieved in a shared LAN environment, no current mechanisms are available to efficiently achieve the same feat in a WAN environment.\nSecond, while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [20, 7, 11], these mechanisms are ill suited to live data center migration, because in general the available technologies are unaware of application/service level semantics.\nIn this paper we outline a design for live service migration across WANs.\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN.\nThe essence of our approach is cooperative, context aware migration, where a migration management system orchestrates the data center migration across all three subsystems involved, namely the server platforms, the wide area network and the disk storage system.\nWhile conceptually similar in nature to the LAN based work described above, using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved.\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services.\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative, context aware functionality needed across the different subsystems to enable this.\n262 2.\nLIVE DATA CENTER MIGRATION ACROSS WANS Three essential subsystems are involved with hosting services in a data center: First, the servers host the application or service logic.\nSecond, services are normally hosted in a data center to provide shared access through a network, either the Internet or virtual private networks (VPNs).\nFinally, most applications require disk storage for storing data and the amount of disk space and the frequency of access varies greatly between different services/applications.\nDisruptions, failures, or in general, outages of any kind of any of these components will cause service disruption.\nFor this reason, prior work and current practices have addressed the robustness of individual components.\nFor example, data centers typically have multiple network connections and redundant LAN devices to ensure redundancy at the networking level.\nSimilarly, physical servers are being designed with redundant hot-swappable components (disks, processor blades, power supplies etc).\nFinally, redundancy at the storage level can be provided through sophisticated data mirroring technologies.\nThe focus of our work, however, is on the case where such local redundancy mechanisms are not sufficient.\nSpecifically, we are interested in providing service availability when the 
data center as a whole becomes unavailable, for example because of data center wide maintenance operations, or because of catastrophic events.\nAs such, our basic approach is to migrate services between data centers across the wide are network (WAN).\nBy necessity, moving or migrating services from one data center to another needs to consider all three of these components.\nHistorically, such migration has been disruptive in nature, requiring downtime of the actual services involved, or requiring heavy weight replication techniques.\nIn the latter case concurrently running replicas of a service can be made available thus allowing a subset of the service to be migrated or maintained without impacting the service as a whole.\nWe argue that these existing mechanisms are inadequate to meet the needs of network-based services, including real-time services, in terms of continuous availability and operation.\nInstead, we advocate an approach where server, network and storage subsystems cooperate and coordinate actions, in a manner that is cognizant of the service context in order to realize seamless migration across wide area networks.\nIn this section we briefly describe the technical building blocks that would enable our approach.\nAs outlined below, some of these building blocks exist, or exist in part, while in other cases we use the desire for high availability of services as the driver for the changes we are proposing.\n2.1 Live Virtual Server Migration The main enabler for our approach is the live server migration capabilities that have been developed in the context of server virtualization in recent years [5, 8].\nIn this approach an entire running operating system (including any active applications) executing as a virtual server is being transfered from one physical machine to another.\nSince the virtual server is migrated in its entirety, both application and kernel level state gets migrated, including any state associated with ongoing network connections.\nAssuming that network level reachability to the virtual server``s network addresses are maintained after the migration, the implication is that applications executing in the virtual server experience very little downtime (in the order of tens to hundreds of milliseconds) and ongoing network connections remain intact.\nIn order to maintain network level reachability, the IP address(es) associated with the virtual server has to be reachable at the physical server where the virtual server is migrated to.\nIn a LAN environment this is achieved either by issuing an unsolicited ARP reply to establish the binding between the new MAC address and the IP address, or by relying on layer-two technologies to allow the virtual server to reuse its (old) MAC address [8].\nBecause of the difficulty of moving network level (i.e., IP addresses) in a routed non-LAN environment, use of live server migration as a management tool has been limited to the LAN environments [22].\nHowever, virtual server migration across the wide area will also be an attractive tool, specifically to deal with outages, and therefore propose networking mechanisms to enable this.\nIf disk storage needs are being met with network attached storage (NAS), the storage becomes just another network based application and can therefore be addressed in the same way with LAN based migration [8].\nModern virtualization environments also include support for other forms of (local) storage including storage area networks (SANs) [23].\nHowever, since we propose to use WAN server migration as a 
means to deal with complete data center outages, these mechanisms are inadequate for our purposes and below we propose extension to remote replication technologies which can work in concert with server migration to minimize service downtime.\n2.2 Networking Requirements From the discussion above, a key requirement for live server migration across a WAN is the ability to have the IP address(es) of the virtual server be reachable at the new data center location immediately after the migration has completed.\nThis presents a significant challenge for a number of reasons.\nFirst, despite decades of work in this area, IP address mobility remains an unresolved problem that is typically only addressed at manual configuration time scales.\nThe second challenge comes from the fact that current routing protocols are well known to have convergence issues which is ill suited to the time constraints imposed by live migration.\nThird, in today``s WAN networking environment connectivity changes are typically initiated, and controlled, by network operators or network management systems.\nAgain this is poorly suited to WAN server migration where it is essential that the migration software, which is closely monitoring the status of the server migration process, initiate this change at the appropriate time.\nOur approach to addressing the networking requirements for live WAN migration builds on the observations that not all networking changes in this approach are time critical and further that instantaneous changes are best achieved in a localized manner.\nSpecifically, in our solution, described in detail in Section 3, we allow the migration software to initiate the necessary networking changes as soon as the need for migration has been identified.\nWe make use of tunneling technologies during this initial phase to preemptively establish connectivity between the data centers involved.\nOnce server migration is complete, the migration software initiates a local change to direct traffic towards the new data center via the tunnel.\nSlower time scale network changes then phase out this local network connectivity change for a more optimal network wide path to the new data center.\n2.3 Storage Replication Requirements Data availability is typically addressed by replicating business data on a local/primary storage system, to some remote location from where it can be accessed.\nFrom a business/usability point of view, such remote replication is driven by two metrics [9].\nFirst 263 is the recovery-point-objective which is the consistent data point to which data can be restored after a disaster.\nSecond is the recoverytime-objective which is the time it takes to recover to that consistent data point after a disaster [13].\nRemote replication can be broadly classified into the following two categories: \u00a1 Synchronous replication: every data block written to a local storage system is replicated to the remote location before the local write operation returns.\n\u00a1 Asynchronous replication: in this case the local and remote storage systems are allowed to diverge.\nThe amount of divergence between the local and remote copies is typically bounded by either a certain amount of data, or by a certain amount of time.\nSynchronous replication is normally recommended for applications, such as financial databases, where consistency between local and remote storage systems is a high priority.\nHowever, these desirable properties come at a price.\nFirst, because every data block needs to be replicated remotely, synchronous 
replication systems can not benefit from any local write coalescing of data if the same data blocks are written repeatedly [16].\nSecond, because data have to be copied to the remote location before the write operation returns, synchronous replication has a direct performance impact on the application, since both lower throughput and increased latency of the path between the primary and the remote systems are reflected in the time it takes for the local disk write to complete.\nAn alternative is to use asynchronous replication.\nHowever, because the local and remote systems are allowed to diverge, asynchronous replication always involves some data loss in the event of a failure of the primary system.\nBut, because write operations can be batched and pipelined, asynchronous replication systems can move data across the network in a much more efficient manner than synchronous replication systems.\nFor WAN live server migration we seek a more flexible replication system where the mode can be dictated by the migration semantics.\nSpecifically, to support live server migration we propose a remote replication system where the initial transfer of data between the data centers is performed via asynchronous replication to benefit from the efficiency of that mode of operation.\nWhen the bulk of the data have been transfered in this manner, replication switches to synchronous replication in anticipation of the completion of the server migration step.\nThe final server migration step triggers a simultaneous switch-over to the storage system at the new data center.\nIn this manner, when the virtual server starts executing in the new data center, storage requirements can be locally met.\n3.\nWAN MIGRATION SCENARIOS In this section we illustrate how our cooperative, context aware approach can combine the technical building blocks described in the previous section to realize live server migration across a wide area network.\nWe demonstrate how the coordination of server virtualization and migration technologies, the storage replication subsystem and the network can achieve live migration of the entire data center across the WAN.\nWe utilize different scenarios to demonstrate our approach.\nIn Section 3.1 we outline how our approach can be used to achieve the safe live migration of a data center when planned maintenance events are handled.\nIn Section 3.2 we show the use of live server migration to mitigate the effects of unplanned outages or failures.\n3.1 Maintenance Outages We deal with maintenance outages in two parts.\nFirst, we consider the case where the service has no (or very limited) storage requirements.\nThis might for example be the case with a network element such as a voice-over-IP (VoIP) gateway.\nSecond, we deal with the more general case where the service also requires the migration of data storage to the new data center.\nWithout Requiring Storage to be Migrated: Without storage to be replicated, the primary components that we need to coordinate are the server migration and network mobility.\nFigure 1 shows the environment where the application running in a virtual server VS has to be moved from a physical server in data center A to a physical server in data center B. 
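Putting these building blocks together, the coordination sequence described in the remainder of this subsection can be sketched as follows; all class and method names are hypothetical, since the paper does not define a concrete API, and the subsystem drivers passed in are assumed to expose the operations shown.

class MigrationManager:
    """Coordinates server, network and storage subsystems for a planned WAN migration."""
    def __init__(self, servers, network, storage):
        self.servers, self.network, self.storage = servers, network, storage

    def migrate(self, vs, src, dst, needs_storage=True):
        # Phase 1: prepare well ahead of the maintenance window.
        self.network.establish_tunnel(src.pe, dst.pe)          # tunnel PE_A -> PE_B
        if needs_storage:
            self.storage.start_async_replication(src, dst)     # no app-visible impact yet
        self.servers.start_precopy(vs, src, dst)                # bulk VM state transfer

        # Phase 2: maintenance imminent, so bound the possible data loss.
        if needs_storage:
            self.storage.drain_pending_writes(src, dst)
            self.storage.switch_to_sync_replication(src, dst)

        # Phase 3: quiesce the virtual server and switch over.
        self.servers.finalize_migration(vs, dst)                # VS becomes active in B
        self.network.switch_traffic_to_tunnel(src.pe)           # local change at PE_A

        # Phase 4: converge to the direct path, then clean up.
        self.network.advertise_preferred_route(dst.pe, vs.address)
        self.network.tear_down_tunnel(src.pe, dst.pe)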
Prior to the maintenance event, the coordinating migration management system (MMS) would signal to both the server management system as well as the network that a migration is imminent.\nThe server management system would initiate the migration of the virtual server from physical server a to physical server b.\nAfter an initial bulk state transfer as preparation for migration, the server management system will mirror any state changes between the two virtual servers.\nSimilarly, for the network part, based on the signal received from the MMS, the service provider edge (PE) router will initiate a number of steps to prepare for the migration.\nSpecifically, as shown in Figure 1(b), the migration system will cause the network to create a tunnel between PE_A and PE_B which will be used subsequently to transfer data destined to VS to data center B.\nWhen the MMS determines a convenient point to quiesce the VS, another signal is sent to both the server management system and the network.\nFor the server management system, this signal will indicate the final migration of the VS from data center A to data center B, i.e., after this the VS will become active in data center B. For the network, this second signal enables the network data path to switch over locally at PE_A to the remote data center.\nSpecifically, from this point in time, any traffic destined for the virtual server address that arrives at PE_A will be switched onto the tunnel to PE_B for delivery to data center B. Note that at this point, from a server perspective the migration is complete as the VS is now active in data center B. However, traffic is sub-optimally flowing first to PE_A and then across the tunnel to PE_B.\nTo rectify this situation another networking step is involved.\nSpecifically, PE_B starts to advertise a more preferred route to reach VS than the route currently being advertised by PE_A.\nIn this manner, as the ingress PEs to the network (shown in Figure 1) receive the more preferred route, traffic will start to flow to PE_B directly and the tunnel between PE_A and PE_B can be torn down, leading to the final state shown in Figure 1(c).\nRequiring Storage Migration: When storage has to also be replicated, it is critical that we achieve the right balance between performance (impact on the application) and the recovery point or data loss when the switchover occurs to the remote data center.\nTo achieve this, we allow the storage to be replicated asynchronously, prior to any initiation of the maintenance event, or, assuming the amount of data to be transferred is relatively small, asynchronous replication can be started in anticipation of a migration that is expected to happen shortly.\nAsynchronous replication during this initial phase allows for the application to see no performance impact.\nHowever, when the maintenance event is imminent, the MMS would signal to the replication system to switch from asynchronous replication to synchronous replication to ensure that there is no loss of data during migration.\nWhen data is being replicated synchronously, there will be a performance impact on the application.\nFigure 1: Live server migration across a WAN\nThis
requires us to keep the exposure to the amount of time we replicate on a synchronous basis to a minimum.\nWhen the MMS signals to the storage system the requirement to switch to synchronous replication, the storage system completes all the pending asynchronous operations and then proceeds to perform all the subsequent writes by synchronously replicating it to the remote data center.\nThus, between the server migration and synchronous replication, both the application state and all the storage operations are mirrored at the two environments in the two data centers.\nWhen all the pending write operations are copied over, then as in the previous case, we quiesce the application and the network is signaled to switch traffic over to the remote data center.\nFrom this point on, both storage and server migration operations are complete and activated in data center B.\nAs above, the network state still needs to be updated to ensure optimal data flow directly to data center B. Note that while we have described the live server migration process as involving the service provider for the networking part, it is possible for a data center provider to perform a similar set of functions without involving the service provider.\nSpecifically, by creating a tunnel between the customer edge (CE) routers in the data center, and performing local switching on the appropriate CE, rather than on the PE, the data center provider can realize the same functionality.\n3.2 Unplanned Outages We propose to also use cooperative, context aware migration to deal with unplanned data center outages.\nThere are multiple considerations that go into managing data center operations to plan and overcome failures through migration.\nSome of these are: (1) amount of overhead under normal operation to overcome anticipated failures; (2) amount of data loss affordable (recovery point objective - RPO); (3) amount of state that has to be migrated; and (4) time available from anticipated failure to occurrence of event.\nAt the one extreme, one might incur the overhead of completely mirroring the application at the remote site.\nThis has the consequence of both incurring processing and network overhead under normal operation as well as impacting application performance (latency and throughput) throughout.\nThe other extreme is to only ensure data recovery and to start a new copy of the application at the remote site after an outage.\nIn this case, application memory state such as ongoing sessions are lost, but data stored on disk is replicated and available in a consistent state.\nNeither this hot standby nor the cold standby approach described are desirable due to the overhead or the loss of application memory state.\nAn intermediate approach is to recover control and essential state of the application, in addition to data stored on disk, to further minimize disruptions to users.\nA spectrum of approaches are possible.\nIn a VoIP server, for instance, session-based information can be mirrored without mirroring the data flowing through each session.\nMore generally, this points to the need to checkpoint some application state in addition to mirroring data on disk.\nCheckpointing application state involves storing application state either periodically or in an application-aware manner like databases do and then copying it to the remote site.\nOf course, this has the consequence that the application can be restarted remotely at the checkpoint boundary only.\nSimilarly, for storage one may use asynchronous replication with a periodic snapshot 
Incremental checkpointing of application state and storage is key to efficiency, and existing techniques can achieve this [4, 3, 11].\nFor instance, rather than fully mirroring the application, a virtualized replica can be maintained as a warm standby, kept in a dormant or hibernating state, enabling a quick switch-over to the previously checkpointed state.\nTo make the switch-over seamless, in addition to replicating data and recovering state, network support is needed.\nSpecifically, on detecting the unavailability of the primary site, the secondary site is made active, and the same mechanism described in Section 3.1 is used to switch traffic over to reach the secondary site via the pre-established tunnel.\nNote that, for simplicity of exposition, we assume here that the PE that performs the local switchover is not affected by the failure.\nThe approach can, however, easily be extended to make use of a switchover at a router deeper in the network.\nThe amount of state and storage that has to be migrated may vary widely from application to application.\nThere may be many situations where, in principle, the server can be stateless.\nFor example, a SIP proxy server may not have any persistent state, and the communication between the clients and the proxy server may use UDP.\nIn such a case, the primary activity to be performed is in the network, to move the communication over to the new data center site.\nLittle or no overhead is incurred under normal operation to enable the migration to a new data center.\nFailure recovery involves no data loss, and we can deal with near-instantaneous, catastrophic failures.\nAs more and more state is involved with the server, more overhead is incurred to checkpoint application state and potentially to take storage snapshots, either periodically or upon application prompting.\nIt also means that, when we must deal with instantaneous failures, the RPO is a function of the interval between checkpoints.\nThe more advance notice we have of an impending failure, the more effectively we can migrate state over to the new data center, so that we can still achieve a tighter RPO when operations are resumed at the new site.
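As a rough illustration of the unplanned-outage sequence above, the sketch below captures only the ordering of the steps: detect that the primary is unreachable, resume the warm standby from its last checkpoint, and switch traffic onto the pre-established tunnel. Every function is a hypothetical stub; none of these calls correspond to an interface of the system described in this paper.

```python
# Illustrative failover sketch for an unplanned outage. Every function here is
# a hypothetical stand-in for real monitoring, hypervisor, and router hooks.
import socket

def primary_reachable(host: str, port: int, timeout_s: float = 1.0) -> bool:
    """Crude liveness probe; a production system would use richer health checks."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

def activate_standby_from_checkpoint() -> None:
    # Resume the dormant virtualized replica at its last checkpointed state.
    print("standby: resuming VM from last checkpoint")

def switch_traffic_to_secondary() -> None:
    # Ask the (unaffected) provider edge router to forward traffic for the
    # virtual server's address over the pre-established tunnel, as in Sec. 3.1.
    print("network: local switchover onto tunnel toward secondary site")

def failover_if_needed(host: str, port: int) -> bool:
    if primary_reachable(host, port):
        return False
    activate_standby_from_checkpoint()
    switch_traffic_to_secondary()
    return True

if __name__ == "__main__":
    # Probe an address that is presumably unreachable (TEST-NET range).
    failover_if_needed("192.0.2.1", 5060)
```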
4.\nRELATED WORK\nPrior work on this topic falls into several categories: virtual machine migration, storage replication, and network support.\nAt the core of our technique is the ability to encapsulate applications within virtual machines that can be migrated without application downtime [15].\nMost virtual machine software, such as Xen [8] and VMware [14], supports live migration of VMs with extremely short downtimes, ranging from tens of milliseconds to a second; details of Xen's live migration techniques are discussed in [8].\nAs indicated earlier, these techniques assume that migration is being done on a LAN.\nVM migration has also been studied in the Shirako system [10] and for grid environments [17, 19].\nCurrent virtual machine software supports a suspend-and-resume feature that can be used for WAN migration, but with downtimes [18, 12].\nRecently, live WAN migration using IP tunnels was demonstrated in [21], where an IP tunnel is set up from the source to the destination server to transparently forward packets to and from the application; we advocate an alternative approach that assumes edge router support.\nIn the context of storage, numerous commercial products perform replication, such as IBM Extended Remote Copy, HP Continuous Access XP, and EMC RepliStor.\nAn excellent description of these and others, as well as a detailed taxonomy of the different approaches to replication, can be found in [11].\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposes supporting data-type-specific selection of fault models and encoding schemes for replication [1].\nRecently, we proposed the notion of semantic-aware replication [13], where the system supports both synchronous and asynchronous replication concurrently and uses signals from the file system to determine whether to replicate a particular write synchronously or asynchronously.\nIn the context of network support, our work is related to the RouterFarm approach [2], which makes use of orchestrated network changes to realize near-hitless maintenance on provider edge routers.\nIn addition to being in a different application area, our approach differs from the RouterFarm work in two respects.\nFirst, we propose to have the required network changes be triggered by functionality outside of the network (as opposed to network management functions inside the network).\nSecond, due to the stringent timing requirements of live migration, we expect that our approach would require new router functionality (as opposed to being realizable via the existing configuration interfaces).\nFinally, the recovery-oriented computing (ROC) work emphasizes recovery from failures rather than failure avoidance [6].\nIn a similar spirit to ROC, we advocate using mechanisms ranging from live VM migration to storage replication to support planned and unplanned outages in data centers (rather than full replication to mask such failures).\n5.\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocated a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe sought to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe advocated using server virtualization technologies to enable the replication and migration of server functions.\nWe proposed new network functions to enable server migration and replication across wide area networks (such as the Internet or a geographically distributed virtual private network), and finally showed the utility of intelligent and dynamic storage replication technology to ensure that applications have access to data in the face of outages with very tight recovery point objectives.\n6.\nREFERENCES\n[1] M. Abd-El-Malek, W. V. Courtright II, C. Cranor, G. R. Ganger, J. Hendricks, A. J. Klosterman, M. Mesnier, M. Prasad, B. Salmon, R. R. Sambasivan, S. Sinnamohideen, J. D. Strunk, E. Thereska, M. Wachs, and J. J.
Wylie.\nUrsa minor: versatile cluster-based storage.\nUSENIX Conference on File and Storage Technologies, December 2005.\n[2] Mukesh Agrawal, Susan Bailey, Albert Greenberg, Jorge Pastor, Panagiotis Sebos, Srinivasan Seshan, Kobus van der Merwe, and Jennifer Yates.\nRouterFarm: Towards a dynamic, manageable network edge.\nSIGCOMM Workshop on Internet Network Management (INM), September 2006.\n[3] L. Alvisi.\nUnderstanding the Message Logging Paradigm for Masking Process Crashes.\nPhD thesis, Cornell, January 1996.\n[4] L. Alvisi and K. Marzullo.\nMessage logging: Pessimistic, optimistic, and causal.\nIn Proceedings of the 15th International Conference on Distributed Computing Systems, pages 229-236.\nIEEE Computer Society, June 1995.\n[5] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield.\nXen and the art of virtualization.\nIn Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), October 2003.\n[6] A. Brown and D. A. Patterson.\nEmbracing failure: A case for recovery-oriented computing (ROC).\n2001 High Performance Transaction Processing Symposium, October 2001.\n[7] K. Brown, J. Katcher, R. Walters, and A. Watson.\nSnapMirror and SnapRestore: Advances in snapshot technology.\nNetwork Appliance Technical Report TR3043.\nwww.netapp.com/tech_library/3043.html.\n[8] C. Clark, K. Fraser, S. Hand, J. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield.\nLive migration of virtual machines.\nIn Proceedings of NSDI, May 2005.\n[9] Disaster Recovery Journal.\nBusiness continuity glossary.\nhttp://www.drj.com/glossary/drjglossary.html.\n[10] Laura Grit, David Irwin, Aydan Yumerefendi, and Jeff Chase.\nVirtual machine hosting for networked clusters: Building the foundations for autonomic orchestration.\nIn the First International Workshop on Virtualization Technology in Distributed Computing (VTDC), November 2006.\n[11] M. Ji, A. Veitch, and J. Wilkes.\nSeneca: Remote mirroring done write.\nUSENIX 2003 Annual Technical Conference, June 2003.\n[12] M. Kozuch and M. Satyanarayanan.\nInternet suspend and resume.\nIn Proceedings of the Fourth IEEE Workshop on Mobile Computing Systems and Applications, Callicoon, NY, June 2002.\n[13] Xiaotao Liu, Gal Niv, K. K. Ramakrishnan, Prashant Shenoy, and Jacobus Van der Merwe.\nThe case for semantic aware remote replication.\nIn Proc. 2nd International Workshop on Storage Security and Survivability (StorageSS 2006), Alexandria, VA, October 2006.\n[14] Michael Nelson, Beng-Hong Lim, and Greg Hutchins.\nFast Transparent Migration for Virtual Machines.\nIn USENIX Annual Technical Conference, 2005.\n[15] Mendel Rosenblum and Tal Garfinkel.\nVirtual machine monitors: Current technology and future trends.\nComputer, 38(5):39-47, 2005.\n[16] C. Ruemmler and J. Wilkes.\nUnix disk access patterns.\nProceedings of Winter 1993 USENIX, Jan 1993.\n[17] Paul Ruth, Junghwan Rhee, Dongyan Xu, Rick Kennell, and Sebastien Goasguen.\nAutonomic Live Adaptation of Virtual Computational Environments in a Multi-Domain Infrastructure.\nIn IEEE International Conference on Autonomic Computing (ICAC), June 2006.\n[18] Constantine P. Sapuntzakis, Ramesh Chandra, Ben Pfaff, Jim Chow, Monica S. Lam, and Mendel Rosenblum.\nOptimizing the migration of virtual computers.\nIn Proceedings of the 5th Symposium on Operating Systems Design and Implementation, December 2002.\n[19] A. Sundararaj, A. Gupta, and P.
Dinda.\nIncreasing Application Performance in Virtual Environments through Run-time Inference and Adaptation.\nIn Fourteenth International Symposium on High Performance Distributed Computing (HPDC), July 2005.\n[20] Symantec Corporation.\nVeritas Volume Replicator Administrator's Guide.\nhttp://ftp.support.veritas.com/pub/support/products/Volume_Replicator/2%83842.pdf, 5.0 edition, 2006.\n[21] F. Travostino, P. Daspit, L. Gommans, C. Jog, C. de Laat, J. Mambretti, I. Monga, B. van Oudenaarde, S. Raghunath, and P. Wang.\nSeamless live migration of virtual machines over the MAN/WAN.\nElsevier Future Generation Computer Systems, 2006.\n[22] T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif.\nBlack-box and gray-box strategies for virtual machine migration.\nIn Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Cambridge, MA, April 2007.\n[23] A Xen way to iSCSI virtualization?\nhttp://www.internetnews.com/dev-news/article.php/3669246, April 2007.", "lvl-3": "Live Data Center Migration across WANs : A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocate a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities .\nWe make use of server virtualization technologies to enable the replication and migration of server functions .\nWe propose new network functions to enable server migration and replication across wide area networks ( e.g.
, the Internet ) , and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives .\n1 .\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications .\nA relatively minor outage can disrupt and inconvenience a large number of users .\nToday these services are almost exclusively hosted in data centers .\nRecent advances in server virtualization technologies [ 8 , 14 , 22 ] allow for the live migration of services within a local area network\n( LAN ) environment .\nIn the LAN environment , these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion .\nNot only can it support planned maintenance events [ 8 ] , but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [ 22 ] .\nWhen using these technologies in a LAN environment , services execute in a virtual server , and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another , without any significant downtime for the service or application .\nIn particular , since the virtual server retains the same network address as before , any ongoing network level interactions are not disrupted .\nSimilarly , in a LAN environment , storage requirements are normally met via either network attached storage ( NAS ) or via a storage area network ( SAN ) which is still reachable from the new physical server location to allow for continued storage access .\nUnfortunately in a wide area environment ( WAN ) , live server migration is not as easily achievable for two reasons : First , live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original .\nWhile this is fairly easily achieved in a shared LAN environment , no current mechanisms are available to efficiently achieve the same feat in a WAN environment .\nSecond , while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [ 20 , 7 , 11 ] , these mechanisms are ill suited to live data center migration , because in general the available technologies are unaware of application/service level semantics .\nIn this paper we outline a design for live service migration across WANs .\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN .\nThe essence of our approach is cooperative , context aware migration , where a migration management system orchestrates the data center migration across all three subsystems involved , namely the server platforms , the wide area network and the disk storage system .\nWhile conceptually similar in nature to the LAN based work described above , using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved .\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling 
data center services .\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative , context aware functionality needed across the different subsystems to enable this .\n2 .\nLIVE DATA CENTER MIGRATION ACROSS WANS\n2.1 Live Virtual Server Migration\n2.2 Networking Requirements\n2.3 Storage Replication Requirements\n3 .\nWAN MIGRATION SCENARIOS\n3.1 Maintenance Outages\n3.2 Unplanned Outages\n4 .\nRELATED WORK\nPrior work on this topic falls into several categories : virtual machine migration , storage replication and network support .\nAt the core of our technique is the ability of encapsulate applications within virtual machines that can be migrated without application downtimes [ 15 ] .\nMost virtual machine software , such as Xen [ 8 ] and VMWare [ 14 ] support `` live '' migration of VMs that involve extremely short downtimes ranging from tens of milliseconds to a second ; details of Xen 's live migration techniques are discussed in [ 8 ] .\nAs indicated earlier , these techniques assume that migration is being done on a LAN .\nVM migration has also been studied in the Shirako system [ 10 ] and for grid environments [ 17 , 19 ] .\nCurrent virtual machine software support a suspend and resume feature that can be used to support WAN migration , but with downtimes [ 18 , 12 ] .\nRecently live WAN migration using IP tunnels was demonstrated in [ 21 ] , where an IP tunnel is set up from the source to destination server to transparently forward packets to and from the application ; we advocate an alternate approach that assumes edge router support .\nIn the context of storage , there exist numerous commercial products that perform replication , such as IBM Extended Remote Copy , HP Continuous Access XP , and EMC RepliStor .\nAn excellent description of these and others , as well as a detailed taxonomy of the different approaches for replication can be found in [ 11 ] .\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposed supporting data-type specific selections of fault models and encoding schemes for replication [ 1 ] .\nRecently , we proposed the notion of semantic-aware replication [ 13 ] where the system supports both synchronous and asynchronous replication concurrently and use `` signals '' from the file system to determine whether to replicate a particular write synchronously and asynchronously .\nIn the context of network support , our work is related to the RouterFarm approach [ 2 ] , which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers .\nIn addition to being in a different application area , our approach differs from the RouterFarm work in two regards .\nFirst , we propose to have the required network changes be triggered by functionality outside of the network ( as opposed to network management functions inside the network ) .\nSecond , due to the stringent timing requirements of live migration , we expect that our approach would require new router functionality ( as opposed to being realizable via the existing configuration interfaces ) .\nFinally , the recovery oriented computing ( ROC ) work emphasizes recovery from failures rather than failure avoidance [ 6 ] .\nIn a similar spirit to ROC , we advocate using mechanisms from live VM migration to storage replication to support planned and unplanned outages in data centers ( rather than full replication to mask such failures ) .\n5 .\nCONCLUSION\nA significant concern 
for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocated a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities .\nWe advocated using server virtualization technologies to enable the replication and migration of server functions .\nWe proposed new network functions to enable server migration and replication across wide area networks ( such as the Internet or a geographically distributed virtual private network ) , and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives .", "lvl-4": "Live Data Center Migration across WANs : A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocate a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities .\nWe make use of server virtualization technologies to enable the replication and migration of server functions .\nWe propose new network functions to enable server migration and replication across wide area networks ( e.g. , the Internet ) , and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives .\n1 .\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nA relatively minor outage can disrupt and inconvenience a large number of users .\nToday these services are almost exclusively hosted in data centers .\nRecent advances in server virtualization technologies [ 8 , 14 , 22 ] allow for the live migration of services within a local area network\n( LAN ) environment .\nIn the LAN environment , these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion .\nNot only can it support planned maintenance events [ 8 ] , but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [ 22 ] .\nWhen using these technologies in a LAN environment , services execute in a virtual server , and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another , without any significant downtime for the service or application .\nIn particular , since the virtual server retains the same network address as before , any ongoing network level interactions are not disrupted .\nSimilarly , in a LAN environment , storage requirements are normally met via either network attached storage ( NAS ) or via a storage area network ( SAN ) which is still reachable from the new physical server location to allow for continued storage access 
.\nUnfortunately in a wide area environment ( WAN ) , live server migration is not as easily achievable for two reasons : First , live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original .\nSecond , while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [ 20 , 7 , 11 ] , these mechanisms are ill suited to live data center migration , because in general the available technologies are unaware of application/service level semantics .\nIn this paper we outline a design for live service migration across WANs .\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN .\nThe essence of our approach is cooperative , context aware migration , where a migration management system orchestrates the data center migration across all three subsystems involved , namely the server platforms , the wide area network and the disk storage system .\nWhile conceptually similar in nature to the LAN based work described above , using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved .\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services .\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative , context aware functionality needed across the different subsystems to enable this .\n4 .\nRELATED WORK\nPrior work on this topic falls into several categories : virtual machine migration , storage replication and network support .\nAt the core of our technique is the ability of encapsulate applications within virtual machines that can be migrated without application downtimes [ 15 ] .\nAs indicated earlier , these techniques assume that migration is being done on a LAN .\nVM migration has also been studied in the Shirako system [ 10 ] and for grid environments [ 17 , 19 ] .\nCurrent virtual machine software support a suspend and resume feature that can be used to support WAN migration , but with downtimes [ 18 , 12 ] .\nRecently live WAN migration using IP tunnels was demonstrated in [ 21 ] , where an IP tunnel is set up from the source to destination server to transparently forward packets to and from the application ; we advocate an alternate approach that assumes edge router support .\nAn excellent description of these and others , as well as a detailed taxonomy of the different approaches for replication can be found in [ 11 ] .\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposed supporting data-type specific selections of fault models and encoding schemes for replication [ 1 ] .\nIn the context of network support , our work is related to the RouterFarm approach [ 2 ] , which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers .\nIn addition to being in a different application area , our approach differs from the RouterFarm work in two regards .\nSecond , due to the stringent timing requirements of live migration , we expect that our approach would require new router functionality ( as opposed to being realizable via the existing configuration interfaces ) .\nIn a similar spirit to ROC , we advocate using mechanisms 
from live VM migration to storage replication to support planned and unplanned outages in data centers ( rather than full replication to mask such failures ) .\n5 .\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocated a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities .\nWe advocated using server virtualization technologies to enable the replication and migration of server functions .\nWe proposed new network functions to enable server migration and replication across wide area networks ( such as the Internet or a geographically distributed virtual private network ) , and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives .", "lvl-2": "Live Data Center Migration across WANs : A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocate a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities .\nWe make use of server virtualization technologies to enable the replication and migration of server functions .\nWe propose new network functions to enable server migration and replication across wide area networks ( e.g. 
, the Internet ) , and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives .\n1 .\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications .\nA relatively minor outage can disrupt and inconvenience a large number of users .\nToday these services are almost exclusively hosted in data centers .\nRecent advances in server virtualization technologies [ 8 , 14 , 22 ] allow for the live migration of services within a local area network\n( LAN ) environment .\nIn the LAN environment , these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion .\nNot only can it support planned maintenance events [ 8 ] , but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [ 22 ] .\nWhen using these technologies in a LAN environment , services execute in a virtual server , and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another , without any significant downtime for the service or application .\nIn particular , since the virtual server retains the same network address as before , any ongoing network level interactions are not disrupted .\nSimilarly , in a LAN environment , storage requirements are normally met via either network attached storage ( NAS ) or via a storage area network ( SAN ) which is still reachable from the new physical server location to allow for continued storage access .\nUnfortunately in a wide area environment ( WAN ) , live server migration is not as easily achievable for two reasons : First , live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original .\nWhile this is fairly easily achieved in a shared LAN environment , no current mechanisms are available to efficiently achieve the same feat in a WAN environment .\nSecond , while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [ 20 , 7 , 11 ] , these mechanisms are ill suited to live data center migration , because in general the available technologies are unaware of application/service level semantics .\nIn this paper we outline a design for live service migration across WANs .\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN .\nThe essence of our approach is cooperative , context aware migration , where a migration management system orchestrates the data center migration across all three subsystems involved , namely the server platforms , the wide area network and the disk storage system .\nWhile conceptually similar in nature to the LAN based work described above , using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved .\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling 
data center services .\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative , context aware functionality needed across the different subsystems to enable this .\n2 .\nLIVE DATA CENTER MIGRATION ACROSS WANS\nThree essential subsystems are involved with hosting services in a data center : First , the servers host the application or service logic .\nSecond , services are normally hosted in a data center to provide shared access through a network , either the Internet or virtual private networks ( VPNs ) .\nFinally , most applications require disk storage for storing data and the amount of disk space and the frequency of access varies greatly between different services/applications .\nDisruptions , failures , or in general , outages of any kind of any of these components will cause service disruption .\nFor this reason , prior work and current practices have addressed the robustness of individual components .\nFor example , data centers typically have multiple network connections and redundant LAN devices to ensure redundancy at the networking level .\nSimilarly , physical servers are being designed with redundant hot-swappable components ( disks , processor blades , power supplies etc ) .\nFinally , redundancy at the storage level can be provided through sophisticated data mirroring technologies .\nThe focus of our work , however , is on the case where such local redundancy mechanisms are not sufficient .\nSpecifically , we are interested in providing service availability when the data center as a whole becomes unavailable , for example because of data center wide maintenance operations , or because of catastrophic events .\nAs such , our basic approach is to migrate services between data centers across the wide are network ( WAN ) .\nBy necessity , moving or migrating services from one data center to another needs to consider all three of these components .\nHistorically , such migration has been disruptive in nature , requiring downtime of the actual services involved , or requiring heavy weight replication techniques .\nIn the latter case concurrently running replicas of a service can be made available thus allowing a subset of the service to be migrated or maintained without impacting the service as a whole .\nWe argue that these existing mechanisms are inadequate to meet the needs of network-based services , including real-time services , in terms of continuous availability and operation .\nInstead , we advocate an approach where server , network and storage subsystems cooperate and coordinate actions , in a manner that is cognizant of the service context in order to realize seamless migration across wide area networks .\nIn this section we briefly describe the technical building blocks that would enable our approach .\nAs outlined below , some of these building blocks exist , or exist in part , while in other cases we use the desire for high availability of services as the driver for the changes we are proposing .\n2.1 Live Virtual Server Migration\nThe main enabler for our approach is the live server migration capabilities that have been developed in the context of server virtualization in recent years [ 5 , 8 ] .\nIn this approach an entire running operating system ( including any active applications ) executing as a virtual server is being transfered from one physical machine to another .\nSince the virtual server is migrated in its entirety , both application and kernel level state gets migrated , including any state 
associated with ongoing network connections .\nAssuming that network level reachability to the virtual server 's network addresses are maintained after the migration , the implication is that applications executing in the virtual server experience very little downtime ( in the order of tens to hundreds of milliseconds ) and ongoing network connections remain intact .\nIn order to maintain network level reachability , the IP address ( es ) associated with the virtual server has to be reachable at the physical server where the virtual server is migrated to .\nIn a LAN environment this is achieved either by issuing an unsolicited ARP reply to establish the binding between the new MAC address and the IP address , or by relying on layer-two technologies to allow the virtual server to reuse its ( old ) MAC address [ 8 ] .\nBecause of the difficulty of moving network level ( i.e. , IP addresses ) in a routed non-LAN environment , use of live server migration as a management tool has been limited to the LAN environments [ 22 ] .\nHowever , virtual server migration across the wide area will also be an attractive tool , specifically to deal with outages , and therefore propose networking mechanisms to enable this .\nIf disk storage needs are being met with network attached storage ( NAS ) , the storage becomes just another network based application and can therefore be addressed in the same way with LAN based migration [ 8 ] .\nModern virtualization environments also include support for other forms of ( local ) storage including storage area networks ( SANs ) [ 23 ] .\nHowever , since we propose to use WAN server migration as a means to deal with complete data center outages , these mechanisms are inadequate for our purposes and below we propose extension to remote replication technologies which can work in concert with server migration to minimize service downtime .\n2.2 Networking Requirements\nFrom the discussion above , a key requirement for live server migration across a WAN is the ability to have the IP address ( es ) of the virtual server be reachable at the new data center location immediately after the migration has completed .\nThis presents a significant challenge for a number of reasons .\nFirst , despite decades of work in this area , IP address mobility remains an unresolved problem that is typically only addressed at manual configuration time scales .\nThe second challenge comes from the fact that current routing protocols are well known to have convergence issues which is ill suited to the time constraints imposed by live migration .\nThird , in today 's WAN networking environment connectivity changes are typically initiated , and controlled , by network operators or network management systems .\nAgain this is poorly suited to WAN server migration where it is essential that the migration software , which is closely monitoring the status of the server migration process , initiate this change at the appropriate time .\nOur approach to addressing the networking requirements for live WAN migration builds on the observations that not all networking changes in this approach are time critical and further that instantaneous changes are best achieved in a localized manner .\nSpecifically , in our solution , described in detail in Section 3 , we allow the migration software to initiate the necessary networking changes as soon as the need for migration has been identified .\nWe make use of tunneling technologies during this initial phase to preemptively establish connectivity between the data 
centers involved .\nOnce server migration is complete , the migration software initiates a local change to direct traffic towards the new data center via the tunnel .\nSlower time scale network changes then phase out this local network connectivity change for a more optimal network wide path to the new data center .\n2.3 Storage Replication Requirements\nData availability is typically addressed by replicating business data on a local/primary storage system , to some remote location from where it can be accessed .\nFrom a business/usability point of view , such remote replication is driven by two metrics [ 9 ] .\nFirst\nis the recovery-point-objective which is the consistent data point to which data can be restored after a disaster .\nSecond is the recoverytime-objective which is the time it takes to recover to that consistent data point after a disaster [ 13 ] .\nRemote replication can be broadly classified into the following two categories : Synchronous replication : every data block written to a local storage system is replicated to the remote location before the local write operation returns .\nAsynchronous replication : in this case the local and remote storage systems are allowed to diverge .\nThe amount of divergence between the local and remote copies is typically bounded by either a certain amount of data , or by a certain amount of time .\nSynchronous replication is normally recommended for applications , such as financial databases , where consistency between local and remote storage systems is a high priority .\nHowever , these desirable properties come at a price .\nFirst , because every data block needs to be replicated remotely , synchronous replication systems can not benefit from any local write coalescing of data if the same data blocks are written repeatedly [ 16 ] .\nSecond , because data have to be copied to the remote location before the write operation returns , synchronous replication has a direct performance impact on the application , since both lower throughput and increased latency of the path between the primary and the remote systems are reflected in the time it takes for the local disk write to complete .\nAn alternative is to use asynchronous replication .\nHowever , because the local and remote systems are allowed to diverge , asynchronous replication always involves some data loss in the event of a failure of the primary system .\nBut , because write operations can be batched and pipelined , asynchronous replication systems can move data across the network in a much more efficient manner than synchronous replication systems .\nFor WAN live server migration we seek a more flexible replication system where the mode can be dictated by the migration semantics .\nSpecifically , to support live server migration we propose a remote replication system where the initial transfer of data between the data centers is performed via asynchronous replication to benefit from the efficiency of that mode of operation .\nWhen the bulk of the data have been transfered in this manner , replication switches to synchronous replication in anticipation of the completion of the server migration step .\nThe final server migration step triggers a simultaneous switch-over to the storage system at the new data center .\nIn this manner , when the virtual server starts executing in the new data center , storage requirements can be locally met .\n3 .\nWAN MIGRATION SCENARIOS\nIn this section we illustrate how our cooperative , context aware approach can combine the technical building 
blocks described in the previous section to realize live server migration across a wide area network .\nWe demonstrate how the coordination of server virtualization and migration technologies , the storage replication subsystem and the network can achieve live migration of the entire data center across the WAN .\nWe utilize different scenarios to demonstrate our approach .\nIn Section 3.1 we outline how our approach can be used to achieve the safe live migration of a data center when planned maintenance events are handled .\nIn Section 3.2 we show the use of live server migration to mitigate the effects of unplanned outages or failures .\n3.1 Maintenance Outages\nWe deal with maintenance outages in two parts .\nFirst , we consider the case where the service has no ( or very limited ) storage requirements .\nThis might for example be the case with a network element such as a voice-over-IP ( VoIP ) gateway .\nSecond , we deal with the more general case where the service also requires the migration of data storage to the new data center .\nWithout Requiring Storage to be Migrated : Without storage to be replicated , the primary components that we need to coordinate are the server migration and network `` mobility '' .\nFigure 1 shows the environment where the application running in a virtual server `` VS '' has to be moved from a physical server in data center A to a physical server in data center B. Prior to the maintenance event , the coordinating `` migration management system '' ( MMS ) would signal to both the server management system as well as the network that a migration is imminent .\nThe server management system would initiate the migration of the virtual server from physical server `` a '' ( ) to physical server `` b '' ( ) .\nAfter an initial bulk state transfer as `` preparation for migration '' , the server management system will mirror any state changes between the two virtual servers .\nSimilarly , for the network part , based on the signal received from the MMS , the service provider edge ( ) router will initiate a number of steps to prepare for the migration .\nSpecifically , as shown in Figure 1 ( b ) , the migration system will cause the network to create a tunnel between and which will be used subsequently to transfer data destined to VS to data center B .\nWhen the MMS determines a convenient point to quiesce the VS , another signal is sent to both the server management system and the network .\nFor the server management system , this signal will indicate the final migration of the VS from data center A to data center B , i.e. , after this the VS will become active in data center B. For the network , this second signal enables the network data path to switchover locally at to the remote data center .\nSpecifically , from this point in time , any traffic destined for the virtual server address that arrives at will be switched onto the tunnel to for delivery to data center B. Note that at this point , from a server perspective the migration is complete as the VS is now active in data center B. 
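As a compact summary of the signaling just described, the sketch below captures only the order of the two signals from the MMS (prepare, then switch); the ServerManager and EdgeRouter classes are hypothetical stand-ins for the server management system and the provider edge routers, not interfaces of the actual system.

```python
# Minimal sketch of the two-phase signaling for a planned maintenance event.
# The classes below are hypothetical stand-ins for the real subsystems that
# the migration management system (MMS) coordinates; only the ordering of the
# signals is meaningful here.
class ServerManager:
    def start_live_migration(self):
        print("server: bulk state transfer A -> B, then mirror further changes")

    def finalize_migration(self):
        print("server: quiesce the VS and activate it in data center B")

class EdgeRouter:
    def __init__(self, name):
        self.name = name

    def build_tunnel(self, peer):
        print(f"network: tunnel {self.name} -> {peer.name} pre-established")

    def local_switchover(self, peer):
        print(f"network: {self.name} forwards VS traffic over the tunnel to {peer.name}")

def planned_migration(server, pe_a, pe_b):
    # Signal 1: migration is imminent -- prepare the server copy and the tunnel.
    server.start_live_migration()
    pe_a.build_tunnel(pe_b)
    # Signal 2: a convenient quiesce point is reached -- cut over.
    server.finalize_migration()
    pe_a.local_switchover(pe_b)
    # (The slower, network-wide route optimization that follows is not shown.)

if __name__ == "__main__":
    planned_migration(ServerManager(), EdgeRouter("PE-A"), EdgeRouter("PE-B"))
```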
However , traffic is sub-optimally flowing first to and then across the tunnel to .\nTo rectify this situation another networking step is involved .\nSpecifically , starts to advertise a more preferred route to reach VS , than the route currently being advertised by .\nIn this manner , as ingress PEs to the network ( to in Figure 1 ) receive the more preferred route , traffic will start to flow to directly and the tunnel between and can be torn down leading to the final state shown in Figure 1 ( c ) .\nRequiring Storage Migration : When storage has to also be replicated , it is critical that we achieve the right balance between performance ( impact on the application ) and the recovery point or data loss when the switchover occurs to the remote data center .\nTo achieve this , we allow the storage to be replicated asynchronously , prior to any initiation of the maintenance event , or , assuming the amount of data to be transfered is relatively small , asynchronous replication can be started in anticipation of a migration that is expected to happen shortly .\nAsynchronous replication during this initial phase allows for the application to see no performance impact .\nHowever , when the maintenance event is imminent , the MMS would signal to the replication system to switch from asynchronous replication to synchronous replication to ensure that there is no loss of data during migration .\nWhen data is being replicated synchronously , there will be a performance impact on the application .\nFigure 1 : Live server migration across a WAN\nThis requires us to keep the exposure to the amount of time we replicate on a synchronous basis to a minimum .\nWhen the MMS signals to the storage system the requirement to switch to synchronous replication , the storage system completes all the pending asynchronous operations and then proceeds to perform all the subsequent writes by synchronously replicating it to the remote data center .\nThus , between the server migration and synchronous replication , both the application state and all the storage operations are mirrored at the two environments in the two data centers .\nWhen all the pending write operations are copied over , then as in the previous case , we quiesce the application and the network is signaled to switch traffic over to the remote data center .\nFrom this point on , both storage and server migration operations are complete and activated in data center B .\nAs above , the network state still needs to be updated to ensure optimal data flow directly to data center B. 
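The replication-mode switch that accompanies the storage migration can be sketched in a few lines as well; the Replicator class below is purely illustrative (real storage systems expose this control through vendor-specific interfaces), and the queue simply models the divergence permitted under asynchronous replication.

```python
# Illustrative sketch of switching from asynchronous to synchronous
# replication ahead of the cutover. All names are hypothetical.
from collections import deque

class Replicator:
    def __init__(self):
        self.mode = "async"
        self.pending = deque()   # writes not yet applied at the remote site
        self.remote = []         # blocks already applied at the remote data center

    def write(self, block):
        if self.mode == "async":
            self.pending.append(block)   # local write returns immediately
        else:
            self.remote.append(block)    # remote copy made before the write completes

    def switch_to_sync(self):
        # MMS signal: finish all pending asynchronous operations, then make
        # every subsequent write synchronous so no data is lost at cutover.
        while self.pending:
            self.remote.append(self.pending.popleft())
        self.mode = "sync"

if __name__ == "__main__":
    r = Replicator()
    for b in range(5):
        r.write(b)               # bulk transfer proceeds asynchronously
    r.switch_to_sync()           # maintenance event is imminent
    r.write(5)                   # now mirrored before completion
    assert r.remote == [0, 1, 2, 3, 4, 5] and not r.pending
    print("remote copy is up to date; safe to quiesce and switch traffic")
```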
Note that while we have described the live server migration process as involving the service provider for the networking part , it is possible for a data center provider to perform a similar set of functions without involving the service provider .\nSpecifically , by creating a tunnel between the customer edge ( CE ) routers in the data center , and performing local switching on the appropriate CE , rather than on the PE , the data center provider can realize the same functionality .\n3.2 Unplanned Outages\nWe propose to also use cooperative , context aware migration to deal with unplanned data center outages .\nThere are multiple considerations that go into managing data center operations to plan and overcome failures through migration .\nSome of these are : ( 1 ) amount of overhead under normal operation to overcome anticipated failures ; ( 2 ) amount of data loss affordable ( recovery point objective - RPO ) ; ( 3 ) amount of state that has to be migrated ; and ( 4 ) time available from anticipated failure to occurrence of event .\nAt the one extreme , one might incur the overhead of completely mirroring the application at the remote site .\nThis has the consequence of both incurring processing and network overhead under normal operation as well as impacting application performance ( latency and throughput ) throughout .\nThe other extreme is to only ensure data recovery and to start a new copy of the application at the remote site after an outage .\nIn this case , application memory state such as ongoing sessions are lost , but data stored on disk is replicated and available in a consistent state .\nNeither this hot standby nor the cold standby approach described are desirable due to the overhead or the loss of application memory state .\nAn intermediate approach is to recover control and essential state of the application , in addition to data stored on disk , to further minimize disruptions to users .\nA spectrum of approaches are possible .\nIn a VoIP server , for instance , session-based information can be mirrored without mirroring the data flowing through each session .\nMore generally , this points to the need to checkpoint some application state in addition to mirroring data on disk .\nCheckpointing application state involves storing application state either periodically or in an application-aware manner like databases do and then copying it to the remote site .\nOf course , this has the consequence that the application can be restarted remotely at the checkpoint boundary only .\nSimilarly , for storage one may use asynchronous replication with a periodic snapshot ensuring all writes are up-to-date at the remote site at the time of checkpointing .\nSome data loss may occur upon an unanticipated , catastrophic failure , but the recovery point may be fairly small , depending on the frequency of checkpointing application and storage state .\nCoordination between\nthe checkpointing of the application state and the snapshot of storage is key to successful migration while meeting the desired RPOs .\nIncremental checkpointing of application and storage is key to efficiency , and we see existing techniques to achieve this [ 4 , 3 , 11 ] .\nFor instance , rather than full application mirroring , a virtualized replica can be maintained as a `` warm standby '' -- in dormant or hibernating state -- enabling a quick switch-over to the previously checkpointed state .\nTo make the switch-over seamless , in addition to replicating data and recovering state , network support is needed 
.\nSpecifically , on detecting the unavailability of the primary site , the secondary site is made active , and the same mechanism described in Section 3.1 is used to switch traffic over to reach the secondary site via the pre-established tunnel .\nNote that for simplicity of exposition we assume here that the PE that performs the local switch over is not affected by the failure .\nThe approach can however , easily be extended to make use of a switchover at a router `` deeper '' in the network .\nThe amount of state and storage that has to be migrated may vary widely from application to application .\nThere may be many situations where , in principle , the server can be stateless .\nFor example , a SIP proxy server may not have any persistent state and the communication between the clients and the proxy server may be using UDP .\nIn such a case , the primary activity to be performed is in the network to move the communication over to the new data center site .\nLittle or no overhead is incurred under normal operation to enable the migration to a new data center .\nFailure recovery involves no data loss and we can deal with near instantaneous , catastrophic failures .\nAs more and more state is involved with the server , more overhead is incurred to checkpoint application state and potentially to take storage snapshots , either periodically or upon application prompting .\nIt also means that the RPO is a function of the interval between checkpoints , when we have to deal with instantaneous failures .\nThe more advanced information we have of an impending failure , the more effective we can be in having the state migrated over to the new data center , so that we can still have a tighter RPO when operations are resumed at the new site .\n4 .\nRELATED WORK\nPrior work on this topic falls into several categories : virtual machine migration , storage replication and network support .\nAt the core of our technique is the ability of encapsulate applications within virtual machines that can be migrated without application downtimes [ 15 ] .\nMost virtual machine software , such as Xen [ 8 ] and VMWare [ 14 ] support `` live '' migration of VMs that involve extremely short downtimes ranging from tens of milliseconds to a second ; details of Xen 's live migration techniques are discussed in [ 8 ] .\nAs indicated earlier , these techniques assume that migration is being done on a LAN .\nVM migration has also been studied in the Shirako system [ 10 ] and for grid environments [ 17 , 19 ] .\nCurrent virtual machine software support a suspend and resume feature that can be used to support WAN migration , but with downtimes [ 18 , 12 ] .\nRecently live WAN migration using IP tunnels was demonstrated in [ 21 ] , where an IP tunnel is set up from the source to destination server to transparently forward packets to and from the application ; we advocate an alternate approach that assumes edge router support .\nIn the context of storage , there exist numerous commercial products that perform replication , such as IBM Extended Remote Copy , HP Continuous Access XP , and EMC RepliStor .\nAn excellent description of these and others , as well as a detailed taxonomy of the different approaches for replication can be found in [ 11 ] .\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposed supporting data-type specific selections of fault models and encoding schemes for replication [ 1 ] .\nRecently , we proposed the notion of semantic-aware replication [ 13 ] where 
the system supports both synchronous and asynchronous replication concurrently and use `` signals '' from the file system to determine whether to replicate a particular write synchronously and asynchronously .\nIn the context of network support , our work is related to the RouterFarm approach [ 2 ] , which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers .\nIn addition to being in a different application area , our approach differs from the RouterFarm work in two regards .\nFirst , we propose to have the required network changes be triggered by functionality outside of the network ( as opposed to network management functions inside the network ) .\nSecond , due to the stringent timing requirements of live migration , we expect that our approach would require new router functionality ( as opposed to being realizable via the existing configuration interfaces ) .\nFinally , the recovery oriented computing ( ROC ) work emphasizes recovery from failures rather than failure avoidance [ 6 ] .\nIn a similar spirit to ROC , we advocate using mechanisms from live VM migration to storage replication to support planned and unplanned outages in data centers ( rather than full replication to mask such failures ) .\n5 .\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages , whether planned or unplanned .\nIn this paper we advocated a cooperative , context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner .\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities .\nWe advocated using server virtualization technologies to enable the replication and migration of server functions .\nWe proposed new network functions to enable server migration and replication across wide area networks ( such as the Internet or a geographically distributed virtual private network ) , and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives ."} {"id": "C-34", "title": "", "abstract": "", "keyphrases": ["sensor network", "kei pool", "kei predistribut", "hierarch hypercub model", "secur", "pairwis kei establish algorithm", "cluster-base distribut model", "polynomi kei", "encrypt", "node code", "high fault-toler", "pairwis kei"], "prmu": [], "lvl-1": "Researches on Scheme of Pairwise Key Establishment for DistributedSensor Networks Wang Lei Fujian University Technology Fuzhou,Funjian, PR.China (+)86-591-8755-9001, 350014 wanglei_hn@hn165.com Chen Zhi-ping Fujian University Technology Fuzhou,Funjian, PR.China (+)86-591-8755-9001, 350014 jt_zpchen@hnu.cn Jiang Xin-hua Fujian University Technology Fuzhou,Funjian, PR.China (+)86-591-8755-9001, 350014 xhj@csu.edu.cn ABSTRACT Security schemes of pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security issue in wireless sensor networks.\nA new kind of cluster deployed sensor networks distribution model is presented, and based on which, an innovative Hierarchical Hypercube model - H(k,u,m,v,n) and the mapping relationship between cluster deployed sensor networks and the H(k,u,m,v,n) are proposed.\nBy utilizing nice properties of H(k,u,m,v,n) model, a new general framework for pairwise key 
predistribution and a new pairwise key establishment algorithm are designed, which combine the ideas of a KDC (Key Distribution Center) and polynomial pool schemes.\nFurthermore, the working performance of the newly proposed pairwise key establishment algorithm is examined in detail.\nTheoretical analysis and experimental figures show that, compared with previous related work, the new algorithm performs better and gives sensors a higher probability of establishing pairwise keys.\nCategories and Subject Descriptors C.2.4 [Computer-Communication-Networks]: Distributed Systems-Distributed applications.\nGeneral Terms: Security.\n1.\nINTRODUCTION Secure communication is an important requirement in many sensor network applications, so shared secret keys are used between communicating nodes to encrypt data.\nAs one of the most fundamental security services, pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques.\nHowever, due to the sensor nodes' limited computational capabilities, battery energy, and available memory, it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and key distribution center (KDC).\nSeveral alternative approaches have been developed recently to perform pairwise key establishment on resource-constrained sensor networks without involving the use of traditional cryptography [14].\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [1].\nIn the scheme, each sensor node randomly picks a set of keys from a key pool before the deployment so that any two of the sensor nodes have a certain probability of sharing at least one common key.\nChan et al. further extended this idea and presented two key predistribution schemes: a q-composite key pre-distribution scheme and a random pairwise keys scheme.\nThe q-composite scheme requires that any two sensors share at least q pre-distributed keys.\nThe random scheme randomly picks pairs of sensors and assigns each pair a unique random key [2].\nInspired by the studies above and the polynomial-based key pre-distribution protocol [3], Liu et al. further developed the idea addressed in the previous works and proposed a general framework of polynomial pool-based key predistribution [4].\nThe basic idea can be considered as the combination of the polynomial-based key pre-distribution and the key pool idea used in [1] and [2].\nBased on such a framework, they presented two pairwise key pre-distribution schemes: a random subset assignment scheme and a grid-based scheme.\nA polynomial pool is used in those schemes, instead of using a key pool as in the previous techniques.\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool.\nThe grid-based scheme associates polynomials with the rows and the columns of an artificial grid, assigns each sensor node to a unique coordinate in the grid, and gives the node the secrets generated from the corresponding row and column polynomials.\nBased on this grid, each sensor node can then identify whether it can directly establish a pairwise key with another node, and if not, what intermediate nodes it can contact to indirectly establish the pairwise key.\nA similar approach to those schemes described by Liu et al. was independently developed by Du et al.
[5].\nTheir approach is based on Blom's scheme [6] rather than on Blundo's scheme.\nIn some cases, it is essentially equivalent to the one in [4].\nAll of the schemes above improve the security over the basic probabilistic key pre-distribution scheme.\nHowever, the pairwise key establishment problem in sensor networks is still not well solved.\nFor the basic probabilistic and the q-composite key predistribution schemes, as the number of compromised nodes increases, the fraction of affected pairwise keys increases quickly.\nAs a result, a small number of compromised nodes may affect a large fraction of pairwise keys [3].\nThough the random pairwise keys scheme does not suffer from the above security problem, it incurs a high memory overhead, which increases linearly with the number of nodes in the network if the level of security is kept constant [2][4].\nThe random subset assignment scheme suffers higher communication and computation overheads.\nIn 2004, Liu proposed a new hypercube-based pairwise key predistribution scheme [7], which extends the grid-based scheme from a two-dimensional grid to a multi-dimensional hypercube.\nThe analysis shows that the hypercube-based scheme keeps some attractive properties of the grid-based scheme, including the guarantee of establishing pairwise keys and the resilience to node compromises.\nAlso, when perfect security against node compromise is required, the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes.\nThough the hypercube-based scheme (we consider the grid-based scheme a special case of the hypercube-based scheme) has many attractive properties, it requires that any two nodes in the sensor network be able to communicate directly with each other.\nThis strong assumption is impractical in most actual applications of sensor networks.\nIn this paper, we present a new cluster-based distribution model of sensor networks, for which we propose a new pairwise key pre-distribution scheme.\nThe main contributions of this paper are as follows: Combining the deployment knowledge of sensor networks and the polynomial pool-based key pre-distribution, we set up a cluster-based topology that is practical for the real deployment of sensor networks.\nBased on this topology, we propose a novel cluster-distribution-based hierarchical hypercube model to establish pairwise keys.\nThe key contribution is that our scheme does not require the assumption that all nodes can communicate directly with each other, as the previous schemes do, while it still maintains a high probability of key establishment, low memory overhead and good security performance.\nWe develop a new pairwise key establishment algorithm based on our hierarchical hypercube model.\nThe structure of this paper is arranged as follows: In section 3, a new distribution model of cluster deployed sensor networks is presented.\nIn section 4, a new Hierarchical Hypercube model is proposed.\nIn section 5, the mapping relationship between the cluster deployed sensor network and the Hierarchical Hypercube model is discussed.\nIn sections 6 and 7, a new pairwise key establishment algorithm is designed based on the Hierarchical Hypercube model and detailed analyses are described.\nFinally, section 8 presents a conclusion.\n2.\nPRELIMINARY Definition 1 (Key Predistribution): The procedure, which is used to encode the corresponding encryption and decryption algorithms in sensor nodes before distribution, is called Key
Predistribution.\nDefinition 2 (Pairwise Key): For any two nodes A and B, if they have a common key E, then the key E is called a pairwise key between them.\nDefinition 3 (Key Path): For any two nodes A0 and Ak with no pairwise key between them, if there exists a path A0,A1,A2,...,Ak-1,Ak such that there exists at least one pairwise key between the nodes Ai and Ai+1 for 0≤i≤k-1, then the path consisting of A0,A1,A2,...,Ak-1,Ak is called a Key Path between A0 and Ak.\nDefinition 4 (n-dimensional Hypercube): An n-dimensional Hypercube (or n-cube) H(v,n) is a topology with the following properties: (1) It consists of n·v^(n-1) edges, (2) Each node can be coded as a string with n positions such as b1b2...bn, where 0≤b1,b2,...,bn≤v-1, (3) Any two nodes are called neighbors, which means that there is an edge between them, iff their node codes differ in exactly one position.\n3.\nMODEL OF CLUSTERS DEPLOYED SENSOR NETWORKS In some actual applications of sensor networks, sensors can be deployed from airplanes.\nSupposing that the sensors are deployed in k rounds and that the communication radius of every sensor is r, the sensors deployed in the same round can be regarded as belonging to the same cluster.\nWe assign a unique cluster number l (1 ≤ l ≤ k) to each cluster.\nSupposing that after deployment the sensors in each cluster form a connected graph, Fig.1 presents an actual model of cluster deployed sensor networks.\nFigure 1. An actual model of cluster deployed sensor networks.\nFrom Figure 1, it is easy to see that, for a given node A, there exist many nodes in the same cluster as A that can communicate directly with A, since the nodes within a cluster are deployed densely.\nBut there exist far fewer nodes in a neighboring cluster that can communicate directly with A, since the two clusters are not deployed at the same time.\n4.\nHIERARCHICAL HYPERCUBE MODEL Definition 5 (k-levels Hierarchical Hypercube): Let there be N nodes in total; then a k-levels Hierarchical Hypercube named H(k,u,m,v,n) can be constructed as follows: 1) The N nodes are divided evenly into k clusters, and the [N/k] nodes in each cluster are connected into an n-dimensional Hypercube: in this n-dimensional Hypercube, every node is encoded as i1i2...in, called its In-Cluster-Hypercube-Node-Code, where 0 ≤ i1,i2,...,in ≤ v-1, v = [(N/k)^(1/n)], and [j] denotes the smallest integer not less than j.\nSo we can obtain k such different hypercubes.\n2) The k different hypercubes obtained above are encoded as j1j2...jm, called Out-Cluster-Hypercube-Node-Codes, where 0 ≤ j1,j2,...,jm ≤ u-1 and u = [k^(1/m)].\nAnd the nodes in the k different hypercubes are connected into m-dimensional hypercubes according to the following rule: the nodes with the same In-Cluster-Hypercube-Node-Code and different Out-Cluster-Hypercube-Node-Codes are connected into an m-dimensional hypercube.\n(The graph constructed through the above steps is called a k-levels Hierarchical Hypercube, abbreviated as H(k,u,m,v,n).)\n3) Any node A in H(k,u,m,v,n) can be encoded as (i, j), where i (i=i1i2...in, 0 ≤ i1,i2,...,in ≤ v-1) is the In-Cluster-Hypercube-Node-Code of node A, and j (j=j1j2...jm, 0 ≤ j1,j2,...,jm ≤ u-1) is the Out-Cluster-Hypercube-Node-Code of node A. Obviously, the H(k,u,m,v,n) model has the following good properties: Property 1: The diameter of the H(k,u,m,v,n) model is m+n.\nProof: Since the diameter of an n-dimensional hypercube is n and the diameter of an m-dimensional hypercube is m, it follows from Definition 5 that the diameter of the H(k,u,m,v,n) model is m+n.
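To make the H(k,u,m,v,n) node coding concrete, the short Python sketch below illustrates the node codes of Definition 5 together with the neighbor and distance relations used in what follows; it is an illustration only, and the tuple representation of codes and the function names are ours rather than part of the original model.

# Illustrative sketch of H(k,u,m,v,n) node codes (not from the original paper).
# A node is represented as a pair (i, j): i is its In-Cluster-Hypercube-Node-Code
# of length n over {0,...,v-1}, and j is its Out-Cluster-Hypercube-Node-Code of
# length m over {0,...,u-1}.

def hamming(a, b):
    # Number of positions in which two equal-length codes differ.
    return sum(1 for x, y in zip(a, b) if x != y)

def distance(node_a, node_b):
    # Distance in the model: dh(i1, i2) + dh(j1, j2)  (see Property 2 below).
    (i1, j1), (i2, j2) = node_a, node_b
    return hamming(i1, i2) + hamming(j1, j2)

def is_neighbor(node_a, node_b):
    # Two nodes are logical neighbors iff their codes differ in exactly one position.
    return distance(node_a, node_b) == 1

# Example with n = 3, v = 3 and m = 2, u = 2:
a = ((0, 1, 2), (1, 0))
b = ((0, 1, 1), (1, 0))   # differs only in the last in-cluster position
print(distance(a, b), is_neighbor(a, b))   # 1 True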
Property 2: The distance between any two nodes A(i1, j1) and B(i2, j2) in the H(k,u,m,v,n) model is d(A,B) = dh(i1, i2) + dh(j1, j2), where dh denotes the Hamming distance.\nProof: Since the distance between any two nodes in a hypercube equals the Hamming distance between their codes, the conclusion of Property 2 follows directly from Definition 5.\n5.\nMAPPING CLUSTERS DEPLOYED SENSOR NETWORKS TO H(K,U,M,V,N) Obviously, from the description in sections 3 and 4, the cluster deployed sensor network can be mapped into a k-levels hierarchical hypercube model as follows: At first, the k clusters in the sensor network are mapped into the k different levels (or hypercubes) of the k-levels hierarchical hypercube model.\nThen, the sensor nodes in each cluster are encoded with In-Cluster-Hypercube-Node-Codes, and the sensor nodes in the k different clusters with the same In-Cluster-Hypercube-Node-Code are encoded with Out-Cluster-Hypercube-Node-Codes according to Definition 5.\nConsequently, the whole sensor network has been mapped into a k-levels hierarchical hypercube model.\n6.\nH(K,U,M,V,N) MODEL-BASED PAIRWISE KEY PREDISTRIBUTION ALGORITHM FOR SENSOR NETWORKS In order to overcome the drawbacks of polynomial-based and polynomial pool-based key predistribution algorithms, this paper proposes an innovative H(k,u,m,v,n) model-based key predistribution scheme and pairwise key establishment algorithm, which combine the advantages of polynomial-based and key pool-based encryption schemes and are based on the KDC and polynomial pool-based key predistribution models.\nThe new H(k,u,m,v,n) model-based pairwise key establishment algorithm includes three main steps: (1) Generation of the polynomial pool and key predistribution, (2) Direct pairwise key discovery, (3) Path key discovery.\n6.1 Generation of Polynomials Pool and Key Predistribution Suppose that the sensor network includes N nodes and is deployed in k different rounds.\nThen we can predistribute keys to each sensor node on the basis of the H(k,u,m,v,n) model as follows: Step 1: The key setup server randomly generates a pool of bivariate polynomials F = { f^i_{l,<i1,...,i(n-1)>}(x,y), f^j_{<i1,...,in,j1,...,j(m-1)>}(x,y) | 0 ≤ i1,...,i(n-1) ≤ v-1, 1 ≤ i ≤ n, 1 ≤ l ≤ k; 0 ≤ i1,...,in ≤ v-1, 0 ≤ j1,...,j(m-1) ≤ u-1, 1 ≤ j ≤ m }, containing v^n·m·u^(m-1) + [N/v^n]·n·v^(n-1) different t-degree bivariate polynomials over a finite field Fq, and then assigns a unique polynomial ID to each bivariate polynomial in F. Step 2: In each round, the key setup server assigns a unique node ID (i1i2...in, j1j2...jm) to each sensor node in increasing order, where 0 ≤ i1,i2,...,in ≤ v-1 and 0 ≤ j1,j2,...,jm ≤ u-1.\nStep 3: The key setup server assigns a unique cluster ID l to all the sensor nodes deployed in the same round, where 1 ≤ l ≤ k.\nStep 4: The key setup server predistributes the m+n bivariate polynomial shares { f^1_{l,<i2,...,in>}(i1,y), ..., f^n_{l,<i1,...,i(n-1)>}(in,y); f^1_{<i1,...,in,j2,...,jm>}(j1,y), ..., f^m_{<i1,...,in,j1,...,j(m-1)>}(jm,y) }, together with the corresponding polynomial IDs, to the sensor node deployed in the l-th round with ID (i1i2...in, j1j2...jm).
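As a sketch of this predistribution step, the following Python fragment shows how the identifiers of the m+n polynomial shares held by a node could be derived from its codes; the identifier format is our own illustrative choice, and the actual t-degree bivariate polynomials over Fq are abstracted away.

# Illustrative sketch of Section 6.1 (Step 4): which polynomial shares a node holds.
# Identifiers only; the bivariate polynomials themselves are not implemented here.

def polynomial_ids(cluster_id, in_code, out_code):
    # Returns the m+n polynomial identifiers preloaded on the node
    # (cluster_id, in_code, out_code); codes are tuples.
    n, m = len(in_code), len(out_code)
    ids = []
    # In-cluster polynomials: one per dimension t, indexed by the cluster
    # and by all in-cluster coordinates except position t.
    for t in range(n):
        ids.append(('in', cluster_id, t, in_code[:t] + in_code[t + 1:]))
    # Out-cluster polynomials: one per dimension t, indexed by the full
    # in-cluster code and by all out-cluster coordinates except position t.
    for t in range(m):
        ids.append(('out', t, in_code, out_code[:t] + out_code[t + 1:]))
    return ids

# A node in cluster 1 with in-code (0,1,2) and out-code (1,2) holds 3 + 2 shares:
print(polynomial_ids(1, (0, 1, 2), (1, 2)))

Two nodes at logical distance 1 then hold shares of the same polynomial, because the identifiers agree for the single dimension in which their codes differ; this is exactly the situation exploited by Theorem 1 below.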
6.2 Direct Pairwise Key Discovery\nIf the node A(i1i2...in, j1j2...jm) in the sensor network wants to establish a pairwise key with a node B(i'1i'2...i'n, j'1j'2...j'm), it can do so through the following method.\nFirstly, node A computes the distance between itself and node B: d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d = 1, then node A obtains the direct pairwise key between itself and node B according to the following Theorem 1:\nTheorem 1: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose that the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d = 1, then there exists a direct pairwise key between nodes A and B.\nProof: Since d = 1, either d1 = 1 and d2 = 0, or d1 = 0 and d2 = 1.\n1) If d1 = 1, d2 = 0: from d2 = 0, nodes A and B belong to the same cluster.\nSupposing that they belong to cluster l, d1 = 1 implies that there is exactly one differing position between i1i2...in and i'1i'2...i'n. Let it = i't for 1 ≤ t ≤ n-1 and in ≠ i'n; then f^n_{l,<i1,...,i(n-1)>}(in, i'n) = f^n_{l,<i'1,...,i'(n-1)>}(i'n, in).\nSo there exists a direct pairwise key f^n_{l,<i1,...,i(n-1)>}(in, i'n) between nodes A and B.\n2) If d1 = 0, d2 = 1: d2 = 1 implies that there is exactly one differing position between j1j2...jm and j'1j'2...j'm. Let jt = j't for 1 ≤ t ≤ m-1 and jm ≠ j'm.\nSince d1 = 0, i1i2...in equals i'1i'2...i'n, and hence f^m_{<i1,...,in,j1,...,j(m-1)>}(jm, j'm) = f^m_{<i'1,...,i'n,j'1,...,j'(m-1)>}(j'm, jm).\nSo there exists a direct pairwise key f^m_{<i1,...,in,j1,...,j(m-1)>}(jm, j'm) between nodes A and B.\nAccording to Theorem 1, the detailed direct pairwise key discovery algorithm is as follows: Step 1: Obtain the node IDs and cluster IDs of the source node A and the destination node B; Step 2: Compute the distance between nodes A and B: d = d1 + d2; Step 3: If d1 = 1 and d2 = 0, then select a common polynomial share of nodes A and B from { f^1_{l,<i2,...,in>}, ..., f^n_{l,<i1,...,i(n-1)>} } to establish a direct pairwise key; Step 4: If d1 = 0 and d2 = 1, then select a common polynomial share of nodes A and B from { f^1_{<i1,...,in,j2,...,jm>}, ..., f^m_{<i1,...,in,j1,...,j(m-1)>} } to establish a direct pairwise key; Step 5: Otherwise, there exists no direct pairwise key between nodes A and B, and the following path key discovery process is used.
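The direct pairwise key discovery steps above can be summarized in a few lines of Python; this is a self-contained sketch under our own identifier convention from the previous fragment, not the authors' implementation.

# Self-contained sketch of direct pairwise key discovery (Section 6.2).

def hamming(a, b):
    return sum(1 for x, y in zip(a, b) if x != y)

def direct_key_id(node_a, node_b):
    # node = (cluster id, in-cluster code, out-cluster code).
    (la, ia, ja), (lb, ib, jb) = node_a, node_b
    d1, d2 = hamming(ia, ib), hamming(ja, jb)
    if d1 == 1 and d2 == 0:
        # Same cluster: the common share is the in-cluster polynomial of the
        # single differing dimension t (Theorem 1, case 1).
        t = next(p for p in range(len(ia)) if ia[p] != ib[p])
        return ('in', la, t, ia[:t] + ia[t + 1:])
    if d1 == 0 and d2 == 1:
        # Same in-cluster code: the common share is the out-cluster polynomial
        # of the single differing dimension t (Theorem 1, case 2).
        t = next(p for p in range(len(ja)) if ja[p] != jb[p])
        return ('out', t, ia, ja[:t] + ja[t + 1:])
    return None   # d > 1: fall back to path key discovery (Section 6.3)

A = (1, (0, 1, 2), (1, 2))
B = (1, (0, 1, 0), (1, 2))
print(direct_key_id(A, B))   # ('in', 1, 2, (0, 1))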
6.3 Path Key Discovery\nIf d > 1, then node A can establish a path key with node B according to the following Theorem 2:\nTheorem 2: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose that the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d > 1, then there exists a path key between nodes A and B.\nProof: Let d1 = a and d2 = b; without loss of generality, assume that it ≠ i't for 1 ≤ t ≤ a and it = i't for t > a, and that jt ≠ j't for 1 ≤ t ≤ b and jt = j't for t > b. Obviously, the nodes A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) belong to the same cluster.\nSo, by the assumption that the nodes in the same cluster form a connected graph, there is a route among those nodes.\nIn addition, the distance between any two neighboring nodes in this sequence is 1, so by Theorem 1 there exists a direct pairwise key between any two neighboring nodes in the sequence.\nFor the nodes (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm-1jm), ..., (i'1i'2...i'n, j'1j'2...j'm-1jm), since they have the same In-Cluster-Hypercube-Node-Code as node B(i'1i'2...i'n, j'1j'2...j'm), these nodes and node B belong to the same logical hypercube.\nObviously, by the assumption that the whole sensor network forms a connected graph, there is a route among those nodes.\nIn addition, the distance between any two neighboring nodes in this sequence is 1, so by Theorem 1 there exists a direct pairwise key between any two neighboring nodes in the sequence.\nSo it is obvious that there exists a path key between nodes A and B.\nAccording to Theorem 2, the detailed path key discovery algorithm is as follows: Step 1: Compute the intermediate nodes (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) and (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...j'm-1jm), ..., (i'1i'2...i'n, j'1j'2...j'm-1jm) from the source node A(i1i2...in, j1j2...jm) and the destination node B(i'1i'2...i'n, j'1j'2...j'm).\nStep 2: In the node sequence A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...j'm-1jm), ..., (i'1i'2...i'n, j'1j'2...j'm-1jm), B(i'1i'2...i'n, j'1j'2...j'm), each pair of neighboring nodes selects their common polynomial share to establish a direct pairwise key.\nFrom Theorem 2, it is easy to see that any source node A can compute a key path P to the destination node B according to the above algorithm when there are no compromised nodes in the sensor network.\nOnce the key path P is computed, node A can send messages to B along the path P to establish an indirect pairwise key with node B. Fig.2 presents an example of key path establishment.\nFigure 2. Key path establishment example.\nFor example, in Figure 2, node A((012),(1234)) can establish a pairwise key with node B((121),(2334)) through the following key path: A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → F((121),(2234)) → B((121),(2334)), where node F shall route through nodes G, H, I, J to establish a direct pairwise key with node B.
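The key-path construction of Section 6.3 can be sketched as follows (Python, illustrative only, under our own code representation): the in-cluster coordinates are corrected one position at a time, then the out-cluster coordinates, so that consecutive nodes are at distance 1 and hold a common polynomial share by Theorem 1. Applied to the codes of Figure 2, it reproduces the hop sequence A, C, D, E, F, B.

# Illustrative sketch of the static key-path computation of Section 6.3.
# Cluster identifiers are omitted: they are determined by the out-cluster code.

def key_path(node_a, node_b):
    (ia, ja), (ib, jb) = node_a, node_b
    path = [(tuple(ia), tuple(ja))]
    cur_i, cur_j = list(ia), list(ja)
    for t in range(len(cur_i)):          # fix in-cluster coordinates first
        if cur_i[t] != ib[t]:
            cur_i[t] = ib[t]
            path.append((tuple(cur_i), tuple(cur_j)))
    for t in range(len(cur_j)):          # then fix out-cluster coordinates
        if cur_j[t] != jb[t]:
            cur_j[t] = jb[t]
            path.append((tuple(cur_i), tuple(cur_j)))
    return path

# Figure 2 example: A((0,1,2),(1,2,3,4)) -> ... -> B((1,2,1),(2,3,3,4)).
for hop in key_path(((0, 1, 2), (1, 2, 3, 4)), ((1, 2, 1), (2, 3, 3, 4))):
    print(hop)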
According to the properties of the H(k,u,m,v,n) model, we can prove the following theorem by combining them with the proof of Theorem 2: Theorem 3: Supposing that there exist no compromised nodes in the sensor network and that the distance between nodes A and B is t, then logically there exists a shortest key path of length t between nodes A and B.\nThat is to say, node A can establish an indirect pairwise key with node B through t-1 intermediate nodes.\nProof: Suppose that the distance between node A(i1i2...in, j1j2...jm) and node B(i'1i'2...i'n, j'1j'2...j'm) is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nSince d = t, according to the construction properties of H(k,u,m,v,n), it is easy to see that there exist t-1 intermediate nodes I1, ..., It-1 in the logical space H(k,u,m,v,n) such that the distance between any two neighboring nodes in the sequence A, I1, ..., It-1, B equals 1.\nSo, according to Theorem 1, the nodes A, I1, ..., It-1, B form a correct key path between nodes A and B.\nIf any two neighboring nodes in the sequence A, I1, ..., It-1, B can communicate directly, then node A can establish an indirect pairwise key with node B through those t-1 intermediate nodes.\n6.4 Dynamic Path Key Discovery The path key discovery algorithm proposed in the above section can establish a key path correctly only when there exist no compromised nodes in the whole sensor network, since the key path is computed beforehand.\nMoreover, the proposed algorithm cannot find an alternative key path when some intermediate nodes are compromised or out of communication radius, even though other alternative key paths may exist in the sensor network.\nFrom the following example we can see that there are many parallel paths in the H(k,u,m,v,n) model for any two given source and destination nodes, since the H(k,u,m,v,n) model is highly fault-tolerant [9,10].\nFigure 3. Alternative key path establishment example.\nFor example, considering the key path establishment example given in the above section based on Figure 2: A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → F((121),(2234)) → B((121),(2334)), supposing that node F((121),(2234)) has been compromised, then from Figure 3 we can see that there exists an alternative key path A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → M((121),(1334)) → B((121),(2334)), which can be used to establish the indirect pairwise key between nodes A and B, where node E shall route through nodes D and K to establish a direct pairwise key with node M, and node M shall route through nodes N, O, G, H, I, J to establish a direct pairwise key with node B.\nSince sensors are resource-limited, they may easily fail or move out of communication radius, so the algorithm proposed in the above section cannot guarantee that a correct key path is established efficiently.\nIn this section, we propose a dynamic path key discovery algorithm, which can effectively improve the probability of establishing a key path: Algorithm I: Dynamic key path establishment algorithm based on the H(k,u,m,v,n) model for cluster deployed sensor networks.\nInput: A sub-sensor network H(k,u,m,v,n), which has some compromised/faulty sensors and faulty links, and two reachable nodes A(a1...an, a'1...a'm) and B(b1...bn, b'1...b'm) in H(k,u,m,v,n), where a't ≠ b't for t ∈ [1,s] and a't = b't for t > s.
Output: A correct key path from node A to node B in H(k,u,m,v,n).\nStep 1: Obtain the code strings of nodes A and B: A ← (a1...an, a'1...a'm), B ← (b1...bn, b'1...b'm), where aj, bj ∈ [0, v-1] and a'j, b'j ∈ [0, u-1].\nStep 2: If a'1...a'm = b'1...b'm, then node A can find a route to B according to the routing algorithms of the hypercube [9,10].\nStep 3: Otherwise, node A can find a route to C(b1...bn, a'1...a'm) according to Algorithm I or Algorithm II.\nThen let I0 = C(b1...bn, a'1...a'm), I1 = (b1...bn, b'1a'2...a'm), ..., Is = B(b1...bn, b'1b'2...b'sa's+1...a'm), and let each node It in this sequence find a route to its neighboring node It+1 on the basis of location information (detailed routing algorithms based on location information can be found in [11-14]).\nStep 4: The algorithm exits.\nIf such a correct key path exists, then node A can establish an indirect pairwise key with node B through it; otherwise, node A fails to establish an indirect pairwise key with node B and will try again some time later.\n7.\nALGORITHM ANALYSES 7.1 Practical Analyses According to the former description and analyses, it is easy to see that the newly proposed algorithm has the following properties: Property 3: When there exist no faulty or compromised nodes, by using the new pairwise key predistribution scheme based on the H(k,u,m,v,n) model, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1)+n(v-1))/(N-1), where N is the total number of nodes in the sensor network and N = u^m · v^n.\nProof: The pairwise key shares predistributed to any node A are FA = { f^1_{l,<i2,...,in>}(i1,y), ..., f^n_{l,<i1,...,i(n-1)>}(in,y); f^1_{<i1,...,in,j2,...,jm>}(j1,y), ..., f^m_{<i1,...,in,j1,...,j(m-1)>}(jm,y) }.\nObviously, in the logical hypercube formed by the nodes in the same cluster as node A, there are n(v-1) nodes that have a direct pairwise key with node A; and in the logical hypercube formed by the nodes in clusters different from that of node A, there are m(u-1) nodes that have a direct pairwise key with node A. Therefore, there are in total m(u-1)+n(v-1) nodes that have a direct pairwise key with node A.\nSo, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1)+n(v-1))/(N-1), since the whole sensor network has N sensor nodes in all.
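Property 3 is easy to check numerically; the small Python fragment below evaluates P for one of the H(8,2,3,v,n) configurations later used for Figure 4 (the concrete choice of v and n is ours, made only so that N is close to 8000).

# Numerical check of Property 3: P = (m*(u-1) + n*(v-1)) / (N - 1), N = u**m * v**n.

def direct_key_probability(u, m, v, n):
    N = (u ** m) * (v ** n)
    return (m * (u - 1) + n * (v - 1)) / (N - 1)

# H(8,2,3,v,n) means k = 8, u = 2, m = 3; with v = 10 and n = 3 the network
# has N = 8 * 1000 = 8000 nodes.
print(direct_key_probability(u=2, m=3, v=10, n=3))   # about 0.00375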
Figure 4 presents the comparison between the probability of direct pairwise key establishment between any two nodes and the dimension n, when the sensor network has different numbers of total nodes and uses the new pairwise key predistribution scheme based on the H(8,2,3,v,n) model.\nFigure 4. Comparison between the probability of direct pairwise key establishment between any two nodes and the dimension n, for different network sizes (N = 8000, 10000, 20000, 30000), using the new pairwise key predistribution scheme based on the H(8,2,3,v,n) model.\nFrom Figure 4, it is easy to see that, by using the new pairwise key predistribution scheme based on the H(k,u,m,v,n) model, the probability of direct pairwise key establishment between any two nodes decreases as the scale of the sensor network increases, and in addition, it decreases as the dimension n increases when the scale of the sensor network is fixed.\nTheorem 4: Suppose that the sensor network has N sensors in total.\nWhen u ≥ v^2, the probability of direct pairwise key establishment between any two nodes under the key distribution scheme based on the hypercube model H(v,p) is smaller than that under the key distribution scheme based on the H(k,u,m,v,n) model.\nProof: Since u ≥ v^2, we can let u = v^t, where t ≥ 2.\nThe total number of nodes in H(v,p) is v^p = N, and the total number of nodes in H(k,u,m,v,n) is u^m · v^n = N.\nLet p = x + n; then u^m · v^n = v^x · v^n ⇒ u^m = v^x ⇒ x = tm.\nFrom Property 3, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1)+n(v-1))/(N-1).\nAccording to the description in [7], it is well known that the probability of direct pairwise key establishment between any two nodes under H(v,p) can be estimated as P' = p(v-1)/(N-1) = (x(v-1)+n(v-1))/(N-1).\nNext, we prove that m(u-1) ≥ x(v-1): m(u-1) = m(v^t - 1) and x(v-1) = tm(v-1).\nConstruct the function f(t) = v^t - 1 - t(v-1), where t ≥ 2.\nWhen t = 2, f(2) = v^2 - 2v + 1 = (v-1)^2 ≥ 0, and f'(t) = t·v^(t-1) - v + 1 ≥ 2v - v + 1 = v + 1 > 0.\nSo f(t) ≥ 0 ⇒ v^t - 1 ≥ t(v-1) ⇒ m(v^t - 1) ≥ tm(v-1) ⇒ m(u-1) ≥ x(v-1).\nTherefore, the conclusion of the theorem stands.\nAs for the conclusion of Theorem 4, we give an example to illustrate it.\nSuppose that the total number of nodes in the sensor network is N = 2^14, and that H(k,u,m,v,n) = H(16,4,2,2,10) and H(v,p) = H(2,14); then the probability of direct pairwise key establishment between any two nodes based on the H(k,u,m,v,n) model is P = (m(u-1)+n(v-1))/(N-1) = (2(4-1)+10(2-1))/(2^14 - 1) = 16/(2^14 - 1), while the probability of direct pairwise key establishment between any two nodes based on the H(v,p) model is P' = p(v-1)/(N-1) = 14(2-1)/(2^14 - 1) = 14/(2^14 - 1).
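The numbers in this example can be reproduced directly; the short fragment below is our own illustrative check, using the formula of Property 3 and the plain hypercube formula from [7].

# Reproducing the Theorem 4 example for N = 2**14 nodes.
N = 2 ** 14
u, m, v, n = 4, 2, 2, 10                         # H(16,4,2,2,10): k = u**m = 16
P_hier = (m * (u - 1) + n * (v - 1)) / (N - 1)   # = 16 / (2**14 - 1)
p = 14                                           # H(2,14): plain hypercube scheme
P_cube = p * (v - 1) / (N - 1)                   # = 14 / (2**14 - 1)
print(P_hier, P_cube, P_hier > P_cube)           # the hierarchical model wins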
Supposing that the total number of nodes in the sensor network is N, Figure 5 illustrates the comparison between the probability of direct pairwise key establishment between any two nodes based on the H(k,u,m,v,n) model and that based on the H(v,p) model, when u = 4 and v = 2.\nFigure 5. Comparison of the probability of direct pairwise key establishment under the H(v,p) and H(k,u,m,v,n) models.\nFrom Figure 5, it is easy to see that the conclusion of Theorem 4 holds.\nTheorem 5: Supposing that the sensor network has N sensors in total, the pairwise key distribution scheme based on the hypercube model H(v,p) is only a special case of the pairwise key distribution scheme based on the H(k,u,m,v,n) model.\nProof: For the pairwise key distribution scheme based on the H(k,u,m,v,n) model, let k = 1 (u = 1, m = 0), which means that the whole sensor network includes only one cluster.\nThen, obviously, the H(k,u,m,v,n) model degenerates into the H(v,n) model.\nAccording to the former analyses in this paper and the definition of the pairwise key distribution scheme based on the hypercube model H(v,p) in [7], it is easy to see that the conclusion of the theorem stands.\n7.2 Security Analyses By using the pairwise key establishment algorithm based on the H(k,u,m,v,n) model, intruders can launch two kinds of attacks: 1) The attackers may target the pairwise key between two particular sensor nodes, in order to compromise the pairwise key between them or to prevent them from establishing a pairwise key.\n2) The attackers may attack the whole sensor network, in order to decrease the probability of pairwise key establishment or to increase the cost of pairwise key establishment.\nAttacks against a pair of sensor nodes 1.\nSuppose that the intruders want to attack two particular sensor nodes u, v, where neither u nor v is compromised, and the intruders want to compromise the pairwise key between them.\n1) If u, v can establish
a direct pairwise key, then the only way to compromise the key is to compromise the common bivariate polynomial f(x,y) shared by u and v.\nSince the degree of the bivariate polynomial f(x,y) is t, the intruders need to compromise at least t+1 sensor nodes that hold a share of the bivariate polynomial f(x,y).\n2) If u, v can establish an indirect pairwise key through intermediate nodes, then the intruders need to compromise at least one intermediate node, or compromise the common bivariate polynomial f(x,y) between two neighboring intermediate nodes.\nBut even if the intruders succeed in doing so, nodes u and v can still re-establish an indirect pairwise key through alternative intermediate nodes.\n2.\nSuppose that the intruders want to attack two particular sensor nodes u, v, where neither u nor v is compromised, and the intruders want to prevent them from establishing a pairwise key.\nThen the intruders need to compromise all of the m+n bivariate polynomials of node u or node v.\nSince the degree of each bivariate polynomial f(x,y) is t, for each bivariate polynomial the intruders need to compromise at least t+1 sensor nodes that hold a share of that polynomial.\nTherefore, the intruders need to compromise (m+n)(t+1) sensor nodes altogether to prevent u and v from establishing a pairwise key.\nAttacks against the sensor network Supposing that the attackers know the distribution of the polynomials over the sensor nodes, they may systematically attack the network by compromising the polynomials in F one by one in order to compromise the entire network.\nAssume the fraction of compromised polynomials is pc; then there are up to N' = pc · ( v^n·m·u^m + [N/v^n]·n·v^n ) = pc·N·(m+n) sensor nodes that have at least one compromised polynomial share.\nAmong the remaining N - N' sensor nodes, none includes a compromised polynomial share.\nSo the remaining N - N' sensor nodes can establish direct pairwise keys by using any one of their polynomial shares.\nHowever, the indirect pairwise keys among the remaining N - N' sensor nodes may be affected, and they may need to re-establish new indirect pairwise keys by selecting alternative intermediate nodes that do not belong to the N' compromised nodes.\nSupposing that the scale of the sensor network is N = 10000, Figure 6 presents the comparison between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on different H(k,u,m,v,n) distribution models.\nFrom Figure 6, it is easy to see that, when the scale of the sensor network is fixed, the number of affected sensor nodes increases as the number of compromised nodes increases.\nFigure 6. Comparison between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(1,0,0,100,2), H(2,2,1,71,2), H(4,2,2,50,2) and H(8,2,3,36,2) distribution models.
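The bound N' = pc·N·(m+n) is straightforward to evaluate; the sketch below is ours, for illustration only, and uses N = 10000 with m = 3 and n = 2, roughly the H(8,2,3,36,2) configuration of Figure 6.

# Upper bound on the number of nodes holding at least one compromised share:
#     N' = p_c * N * (m + n),
# since every node carries exactly m + n polynomial shares.

def affected_nodes(p_c, N, m, n):
    return p_c * N * (m + n)

for p_c in (0.02, 0.08, 0.16):
    print(p_c, affected_nodes(p_c, N=10000, m=3, n=2))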
Theorem 6: Supposing that the sensor network has N sensors in total and that the fraction of compromised nodes is pc, then when u > v, the number of affected nodes under the H(v,p) model-based key predistribution scheme is larger than that under the H(k,u,m,v,n) model-based key predistribution scheme.\nProof: The number of affected nodes under the H(k,u,m,v,n) model-based key predistribution scheme is pc·N·(m+n), and it is proved in [7] that the number of affected nodes under the H(v,p) model-based key predistribution scheme is pc·N·p. Let p = x + n; then u^m · v^n = v^x · v^n ⇒ u^m = v^x.\nSince u > v, we have x > m, and therefore pc·N·(m+n) < pc·N·(x+n) = pc·N·p. Supposing that the scale of the sensor network is N = 10000, Figure 7 presents the comparison between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on several H(k,u,m,v,n) distribution models and the H(2,p) model.\nFrom Figure 7, it is easy to see that the conclusion of Theorem 6 is correct, and that the number of affected sensor nodes increases as the number of compromised nodes increases when the scale of the sensor network is fixed.\nFigure 7. Comparison between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(9,3,2,34,2), H(16,4,2,25,2), H(225,15,2,7,2), H(1296,36,2,3,2) and H(2,14) distribution models.\n8.\nCONCLUSION A new hierarchical hypercube model named H(k,u,m,v,n) is proposed, which can be used for pairwise key predistribution in cluster deployed sensor networks.\nBased on the H(k,u,m,v,n) model, an innovative pairwise key predistribution scheme and a pairwise key establishment algorithm are designed, combining the good properties of the polynomial key and key pool encryption schemes.\nThe new algorithm uses the good characteristics of node codes and the high fault-tolerance of the H(k,u,m,v,n) model to route and predistribute pairwise keys, and it does not require nodes to be able to communicate directly with each other, as the algorithms proposed in [7] do.\nSo the traditional pairwise key predistribution algorithm based on the hypercube model [7] is only a special case of the new algorithm proposed in this paper.\nTheoretical and experimental analyses show that the newly proposed algorithm is an efficient pairwise key establishment algorithm that is suitable for cluster deployed sensor networks.\n9.\nACKNOWLEDGMENTS Our thanks to ACM SIGCHI for allowing us to modify templates they had developed, and to the Natural Science Fund of Fujian Province, P.R. China, under grant No. A0510024.\n10.\nREFERENCES [1] L. Eschenauer and V. Gligor.\nA key-management scheme for distributed sensor networks.\nIn Proceedings of the 9th ACM Conference on Computer and Communication Security.\nACM Press, Washington DC, USA, 2002, 41-47.\n[2] H. Chan, A. Perrig, and D. Song.\nRandom key predistribution schemes for sensor networks.\nIn IEEE Symposium on Security and Privacy.\nIEEE Computer Society, California, USA, 2003, 197-213.\n[3] C. Blundo, A. D. Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung.\nPerfectly-secure key distribution for dynamic conferences.\nLecture Notes in Computer Science.\n1993, 740, 471-486.\n[4] D. Liu and P. Ning.\nEstablishing pairwise keys in distributed sensor networks.\nIn Proceedings of the 10th ACM Conference on Computer and Communications Security.\nACM Press, Washington, DC, USA, 2003, 52-61.\n[5] W. Du, J. Deng, Y. Han, and P.
Varshney.\nA pairwise key pre-distribution scheme for wireless sensor networks.\nIn Proceedings of the Tenth ACM Conference on Computer and Communications Security.\nWashington, DC, USA, 2003, 42-51.\n[6] R. Blom.\nAn optimal class of symmetric key generation systems.\nAdvances in Cryptology: Proceedings of EUROCRYPT 84.\nLecture Notes in Computer Science.\n1985, 209, 335-338.\n[7] Donggang Liu, Peng Ning, Rongfang Li.\nEstablishing Pairwise Keys in Distributed Sensor Networks.\nACM Journal Name, 2004, 20, 1-35.\n[8] L. Fang, W. Du, and N. Peng.\nA Beacon-Less Location Discovery Scheme for Wireless Sensor Networks.\nINFOCOM 2005.\n[9] Wang Lei, Lin Ya-ping.\nMaximum safety path matrix based fault-tolerant routing algorithm for hypercube interconnection network.\nJournal of Software.\n2004, 15(7), 994-1004.\n[10] Wang Lei, Lin Ya-ping.\nMaximum safety path vector based fault-tolerant routing algorithm for hypercube interconnection network.\nJournal of China Institute of Communications.\n2004, 16(4), 130-137.\n[11] Lin Ya-ping, Wang Lei.\nLocation information based hierarchical data congregation routing algorithm for sensor networks.\nChinese Journal of Electronics.\n2004, 32(11), 1801-1805.\n[12] W. Heinzelman, J. Kulik, and H. Balakrishnan.\nNegotiation Based Protocols for Disseminating Information in Wireless Sensor Networks.\nACM Wireless Networks.\n2002, 8, 169-185.\n[13] Manjeshwar, A.; Agrawal, D.P. TEEN: a routing protocol for enhanced efficiency in wireless sensor networks.\nIn Proceedings of the 15th Parallel and Distributed Processing Symposium.\nIEEE Computer Society, San Francisco, USA, 2001, 2009-2015.\n[14] B. Krishnamachari, D. Estrin, and S. Wicker.\nModelling Data-Centric Routing in Wireless Sensor Networks.\nIn Proceedings of IEEE Infocom, 2002.", "lvl-3": "Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks\nABSTRACT\nSecurity schemes of pairwise key establishment , which enable sensors to communicate with each other securely , play a fundamental role in research on security issue in wireless sensor networks .\nA new kind of cluster deployed sensor networks distribution model is presented , and based on which , an innovative Hierarchical Hypercube model - H ( k , u , m , v , n ) and the mapping relationship between cluster deployed sensor networks and the H ( k , u , m , v , n ) are proposed .\nBy utilizing nice properties of H ( k , u , m , v , n ) model , a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed , which combines the idea of KDC ( Key Distribution Center ) and polynomial pool schemes .\nFurthermore , the working performance of the newly proposed pairwise key establishment algorithm is seriously inspected .\nTheoretic analysis and experimental figures show that the new algorithm has better performance and provides higher possibilities for sensor to establish pairwise key , compared with previous related works .\n1 .\nINTRODUCTION\nSecurity communication is an important requirement in many sensor network applications , so shared secret keys are used between communicating nodes to encrypt data .\nAs one of the most fundamental security services , pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques .\nHowever , due to the sensor nodes ' limited computational capabilities , battery energy , and available memory , it is not feasible for them to use traditional pairwise key establishment techniques such as
public key cryptography and key distribution center ( KDC ) .\nSeveral\nalternative approaches have been developed recently to perform pairwise key establishment on resource-constrained sensor networks without involving the use of traditional cryptography [ 14 ] .\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [ 1 ] .\nIn the scheme , each sensor node randomly picks a set of keys from a key pool before the deployment so that any two of the sensor nodes have a certain probability to share at least one common key .\nChan et al. further extended this idea and presented two key predistribution schemes : a q-composite key pre-distribution scheme and a random pairwise keys scheme .\nThe q-composite scheme requires any two sensors share at least q pre-distributed keys .\nThe random scheme randomly picks pair of sensors and assigns each pair a unique random key [ 2 ] .\nInspired by the studies above and the polynomial-based key pre-distribution protocol [ 3 ] , Liu et al. further developed the idea addressed in the previous works and proposed a general framework of polynomial pool-based key predistribution [ 4 ] .\nThe basic idea can be considered as the combination of the polynomial-based key pre-distribution and the key pool idea used in [ 1 ] ] and [ 2 ] .\nBased on such a framework , they presented two pairwise key pre-distribution schemes : a random subset assignment scheme and a grid-based scheme .\nA polynomial pool is used in those schemes , instead of using a key pool in the previous techniques .\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool .\nThe gridbased scheme associates polynomials with the rows and the columns of an artificial grid , assigns each sensor node to a unique coordinate in the grid , and gives the node the secrets generated from the corresponding row and column polynomials .\nBased on this grid , each sensor node can then identify whether it can directly establish a pairwise key with another node , and if not , what intermediate nodes it can contact to indirectly establish the pairwise key .\nA similar approach to those schemes described by Liu et al was independently developed by Du et a. 
[ 5 ] .\nRather than on Blundo 's scheme their approach is based on Blom 's scheme [ 6 ] .\nIn some cases , it is essentially equivalent to the one in [ 4 ] .\nAll of those schemes above improve the security over the basic probabilistic key pre-distribution scheme .\nHowever , the pairwise key establishment problem in sensor networks is still not well solved .\nFor the basic probabilistic and the q-composite key predistribution schemes , as the number of compromised nodes increases , the fraction of affected pairwise keys increases quickly .\nAs a result , a small number of compromised nodes may affect a\nlarge fraction of pairwise keys [ 3 ] .\nThough the random pairwise keys scheme doses not suffer from the above security problem , it incurs a high memory overhead , which increases linearly with the number of nodes in the network if the level of security is kept constant [ 2 ] [ 4 ] .\nFor the random subset assignment scheme , it suffers higher communication and computation overheads .\nIn 2004 , Liu proposed a new hypercube-based pairwise key predistribution scheme [ 7 ] , which extends the grid-based scheme from a two dimensional grid to a multi-dimensional hypercube .\nThe analysis shows that hypercube-based scheme keeps some attractive properties of the grid-based scheme , including the guarantee of establishing pairwise keys and the resilience to node compromises .\nAlso , when perfect security against node compromise is required , the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes .\nThough hypercube-based scheme ( we consider the grid-based scheme is a special case of hypercube-based scheme ) has many attractive properties , it requires any two nodes in sensor networks can communication directly with each other .\nThis strong assumption is impractical in most of the actual applications of the sensor networks .\nIn this paper , we present a kind of new cluster-based distribution model of sensor networks , and for which , we propose a new pairwise key pre-distribution scheme .\nThe main contributions of this paper are as follows : Combining the deployment knowledge of sensor networks and the polynomial pool-based key pre-distribution , we setup a clusterbased topology that is practical with the real deployment of sensor networks .\nBased on the topology , we propose a novel cluster distribution based hierarchical hypercube model to establish the pairwise key .\nThe key contribution is that our scheme dose not require the assumption of all nodes can directly communicate with each other as the previous schemes do , and it still maintains high probability of key establishment , low memory overhead and good security performance .\nWe develop a kind of new pairwise key establishment algorithm with our hierarchical hypercube model .\nThe structure of this paper is arranged as follows : In section 3 , a new distribution model of cluster deployed sensor networks is presented .\nIn section 4 , a new Hierarchical Hypercube model is proposed .\nIn section 5 , the mapping relationship between the clusters deployed sensor network and Hierarchical Hypercube model is discussed .\nIn section 6 and section 7 , new pairwise key establishment algorithm are designed based on the Hierarchical Hypercube model and detailed analyses are described .\nFinally , section 8 presents a conclusion .\n2 .\nPRELIMINARY\nDefinition 1 ( Key Predistribution ) : The procedure , which is used to encode the corresponding encryption 
and decryption algorithms in sensor nodes before distribution , is called Key Predistribution .\nDefinition 2 ( Pairwise Key ) : For any two nodes A and B , if they have a common key E , then the key E is called a pairwise key between them .\nDefinition 3 ( Key Path ) : For any two nodes A0 and Ak , when there has not a pairwise key between them , if there exists a path A0,A1,A2,...,Ak-1 , Ak , and there exists at least one pairwise key between the nodes Ai and Aj for 05i5k-1 and 15j5k , then the path consisted of A0 , A1 , A2 , ... , Ak-1 , Ak is called a Key Path between A0 and Ak .\nDefinition 4 ( n-dimensional Hypercube ) : An n-dimensional Hypercube ( or n \u2212 cube ) H ( v , n ) is a topology with the following properties : ( 1 ) It is consisted of n \u00b7 vn-1 edges , ( 2 ) Each node can be coded as a string with n positions such as b1b2 ... bn , where 05b1,b2,...,bn5v-1 , ( 3 ) Any two nodes are called neighbors , which means that there is an edge between them , iff there is just one position different between their node codes .\n3 .\nMODEL OF CLUSTERS DEPLOYED SENSOR NETWORKS\n4 .\nHIERARCHICAL HYPERCUBE MODEL\n5 .\nMAPPING CLUSTERS DEPLOYED SENSOR NETWORKS TO H ( K , U , M , V , N )\n6 .\nH ( K , U , M , V , N ) MODEL-BASED PAIRWISE KEY PREDISTRIBUTION ALGORITHM FOR SENSOR NETWORKS\n6.1 Generation of Polynomials Pool and Key Predistribution\n6.2 Direct Pairwise Key Discovery\n6.3 Path Key Discovery\nFigure .2 Key path establishment example .\n6.4 Dynamic Path Key Discovery\n7 .\nALGORITHM ANALYSES\n7.1 Practical Analyses\n7.2 Security Analyses\nAttacks against a Pair of sensor nodes\nAttacks against the sensor network\nFraction of Compromised B ivariate Polynomials\nFraction of Compromised Bivariate Polynomials\n8 .\nCONCLUSION\nA new hierarchical hypercube model named H ( k , u , m , v , n ) is proposed , which can be used for pairwise key predistribution for cluster deployed sensor networks .\nAnd Based on the H ( k , u , m , v , n ) model , an innovative pairwise key predistribution scheme and algorithm are designed respectively , by combing the good properties of the Polynomial Key and Key Pool encryption schemes .\nThe new algorithm uses the good characteristics of node codes and high fault-tolerance of H ( k , u , m , v , n ) model to route and predistribute pairwise keys , in which nodes are not needed to be able to communicate with each other directly such as that the algorithms proposed by [ 7 ] shall need .\nSo , the traditional pairwise key predistribution algorithm based on hypercube model [ 7 ] is only a special case of the new algorithm proposed in this paper .\nTheoretical and experimental analyses show that the newly proposed algorithm is an efficient pairwise key establishment algorithm that is suitable for the cluster deployed sensor networks .", "lvl-4": "Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks\nABSTRACT\nSecurity schemes of pairwise key establishment , which enable sensors to communicate with each other securely , play a fundamental role in research on security issue in wireless sensor networks .\nA new kind of cluster deployed sensor networks distribution model is presented , and based on which , an innovative Hierarchical Hypercube model - H ( k , u , m , v , n ) and the mapping relationship between cluster deployed sensor networks and the H ( k , u , m , v , n ) are proposed .\nBy utilizing nice properties of H ( k , u , m , v , n ) model , a new general framework for pairwise key predistribution and a new 
pairwise key establishment algorithm are designed , which combines the idea of KDC ( Key Distribution Center ) and polynomial pool schemes .\nFurthermore , the working performance of the newly proposed pairwise key establishment algorithm is seriously inspected .\nTheoretic analysis and experimental figures show that the new algorithm has better performance and provides higher possibilities for sensor to establish pairwise key , compared with previous related works .\n1 .\nINTRODUCTION\nSecurity communication is an important requirement in many sensor network applications , so shared secret keys are used between communicating nodes to encrypt data .\nAs one of the most fundamental security services , pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques .\nHowever , due to the sensor nodes ' limited computational capabilities , battery energy , and available memory , it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and key distribution center ( KDC ) .\nSeveral\nalternative approaches have been developed recently to perform pairwise key establishment on resource-constrained sensor networks without involving the use of traditional cryptography [ 14 ] .\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [ 1 ] .\nIn the scheme , each sensor node randomly picks a set of keys from a key pool before the deployment so that any two of the sensor nodes have a certain probability to share at least one common key .\nChan et al. further extended this idea and presented two key predistribution schemes : a q-composite key pre-distribution scheme and a random pairwise keys scheme .\nThe q-composite scheme requires any two sensors share at least q pre-distributed keys .\nThe random scheme randomly picks pair of sensors and assigns each pair a unique random key [ 2 ] .\nBased on such a framework , they presented two pairwise key pre-distribution schemes : a random subset assignment scheme and a grid-based scheme .\nA polynomial pool is used in those schemes , instead of using a key pool in the previous techniques .\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool .\nThe gridbased scheme associates polynomials with the rows and the columns of an artificial grid , assigns each sensor node to a unique coordinate in the grid , and gives the node the secrets generated from the corresponding row and column polynomials .\nBased on this grid , each sensor node can then identify whether it can directly establish a pairwise key with another node , and if not , what intermediate nodes it can contact to indirectly establish the pairwise key .\nA similar approach to those schemes described by Liu et al was independently developed by Du et a. 
[ 5 ] .\nRather than on Blundo 's scheme their approach is based on Blom 's scheme [ 6 ] .\nAll of those schemes above improve the security over the basic probabilistic key pre-distribution scheme .\nHowever , the pairwise key establishment problem in sensor networks is still not well solved .\nFor the basic probabilistic and the q-composite key predistribution schemes , as the number of compromised nodes increases , the fraction of affected pairwise keys increases quickly .\nAs a result , a small number of compromised nodes may affect a\nlarge fraction of pairwise keys [ 3 ] .\nThough the random pairwise keys scheme doses not suffer from the above security problem , it incurs a high memory overhead , which increases linearly with the number of nodes in the network if the level of security is kept constant [ 2 ] [ 4 ] .\nFor the random subset assignment scheme , it suffers higher communication and computation overheads .\nIn 2004 , Liu proposed a new hypercube-based pairwise key predistribution scheme [ 7 ] , which extends the grid-based scheme from a two dimensional grid to a multi-dimensional hypercube .\nThe analysis shows that hypercube-based scheme keeps some attractive properties of the grid-based scheme , including the guarantee of establishing pairwise keys and the resilience to node compromises .\nAlso , when perfect security against node compromise is required , the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes .\nThough hypercube-based scheme ( we consider the grid-based scheme is a special case of hypercube-based scheme ) has many attractive properties , it requires any two nodes in sensor networks can communication directly with each other .\nThis strong assumption is impractical in most of the actual applications of the sensor networks .\nIn this paper , we present a kind of new cluster-based distribution model of sensor networks , and for which , we propose a new pairwise key pre-distribution scheme .\nBased on the topology , we propose a novel cluster distribution based hierarchical hypercube model to establish the pairwise key .\nWe develop a kind of new pairwise key establishment algorithm with our hierarchical hypercube model .\nThe structure of this paper is arranged as follows : In section 3 , a new distribution model of cluster deployed sensor networks is presented .\nIn section 4 , a new Hierarchical Hypercube model is proposed .\nIn section 5 , the mapping relationship between the clusters deployed sensor network and Hierarchical Hypercube model is discussed .\nIn section 6 and section 7 , new pairwise key establishment algorithm are designed based on the Hierarchical Hypercube model and detailed analyses are described .\nFinally , section 8 presents a conclusion .\n2 .\nPRELIMINARY\nDefinition 1 ( Key Predistribution ) : The procedure , which is used to encode the corresponding encryption and decryption algorithms in sensor nodes before distribution , is called Key Predistribution .\nDefinition 2 ( Pairwise Key ) : For any two nodes A and B , if they have a common key E , then the key E is called a pairwise key between them .\n8 .\nCONCLUSION\nA new hierarchical hypercube model named H ( k , u , m , v , n ) is proposed , which can be used for pairwise key predistribution for cluster deployed sensor networks .\nAnd Based on the H ( k , u , m , v , n ) model , an innovative pairwise key predistribution scheme and algorithm are designed respectively , by combing the good 
properties of the Polynomial Key and Key Pool encryption schemes.\nConsequently, the traditional hypercube-based pairwise key predistribution algorithm [7] is only a special case of the new algorithm proposed in this paper.\nTheoretical and experimental analyses show that the newly proposed algorithm is an efficient pairwise key establishment algorithm suitable for cluster-deployed sensor networks.", "lvl-2": "Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks\nABSTRACT\nSecurity schemes for pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security in wireless sensor networks.\nA new distribution model for cluster-deployed sensor networks is presented, and based on it an innovative hierarchical hypercube model, H(k,u,m,v,n), together with the mapping between cluster-deployed sensor networks and H(k,u,m,v,n), is proposed.\nBy exploiting the properties of the H(k,u,m,v,n) model, a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed, combining the ideas of a KDC (Key Distribution Center) and polynomial pool schemes.\nFurthermore, the performance of the newly proposed pairwise key establishment algorithm is examined in detail.\nTheoretical analysis and experimental figures show that, compared with previous related work, the new algorithm performs better and gives sensors a higher probability of establishing pairwise keys.\n1.\nINTRODUCTION\nSecure communication is an important requirement in many sensor network applications, so shared secret keys are used between communicating nodes to encrypt data.\nAs one of the most fundamental security services, pairwise key establishment enables sensor nodes to communicate securely with each other using cryptographic techniques.\nHowever, due to the sensor nodes' limited computational capability, battery energy, and available memory, it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and a key distribution center (KDC).\nSeveral alternative approaches have recently been developed to perform pairwise key establishment on resource-constrained sensor networks without traditional cryptography [14].\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [1].\nIn that scheme, each sensor node randomly picks a set of keys from a key pool before deployment, so that any two sensor nodes share at least one common key with a certain probability.\nChan et al. further extended this idea and presented two key predistribution schemes: a q-composite key pre-distribution scheme and a random pairwise keys scheme.\nThe q-composite scheme requires any two sensors to share at least q pre-distributed keys.\nThe random scheme randomly picks pairs of sensors and assigns each pair a unique random key [2].\nInspired by the studies above and by the polynomial-based key pre-distribution protocol [3], Liu et al.
further developed the idea addressed in the previous works and proposed a general framework of polynomial pool-based key predistribution [4].\nThe basic idea can be seen as a combination of polynomial-based key pre-distribution and the key pool idea used in [1] and [2].\nBased on this framework, they presented two pairwise key pre-distribution schemes: a random subset assignment scheme and a grid-based scheme.\nA polynomial pool is used in those schemes instead of the key pool of the earlier techniques.\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool.\nThe grid-based scheme associates polynomials with the rows and columns of an artificial grid, assigns each sensor node a unique coordinate in the grid, and gives the node the secrets generated from the corresponding row and column polynomials.\nBased on this grid, each sensor node can then identify whether it can directly establish a pairwise key with another node and, if not, which intermediate nodes it can contact to establish the pairwise key indirectly.\nA similar approach to the schemes described by Liu et al. was independently developed by Du et al. [5].\nTheir approach is based on Blom's scheme [6] rather than on Blundo's scheme.\nIn some cases it is essentially equivalent to the one in [4].\nAll of the schemes above improve security over the basic probabilistic key pre-distribution scheme.\nHowever, the pairwise key establishment problem in sensor networks is still not well solved.\nFor the basic probabilistic and the q-composite key predistribution schemes, as the number of compromised nodes increases, the fraction of affected pairwise keys increases quickly.\nAs a result, a small number of compromised nodes may affect a large fraction of pairwise keys [3].\nThough the random pairwise keys scheme does not suffer from this security problem, it incurs a high memory overhead, which increases linearly with the number of nodes in the network if the level of security is kept constant [2][4].\nThe random subset assignment scheme suffers from higher communication and computation overheads.\nIn 2004, Liu proposed a new hypercube-based pairwise key predistribution scheme [7], which extends the grid-based scheme from a two-dimensional grid to a multi-dimensional hypercube.\nThe analysis shows that the hypercube-based scheme keeps some attractive properties of the grid-based scheme, including the guarantee of establishing pairwise keys and resilience to node compromise.\nAlso, when perfect security against node compromise is required, the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes.\nThough the hypercube-based scheme (we consider the grid-based scheme a special case of it) has many attractive properties, it requires that any two nodes in the sensor network be able to communicate directly with each other.\nThis strong assumption is impractical in most actual sensor network applications.\nIn this paper, we present a new cluster-based distribution model of sensor networks, and for it we propose a new pairwise key pre-distribution scheme.\nThe main contributions of this paper are as follows: Combining the deployment knowledge of sensor networks and polynomial pool-based key pre-distribution, we set up a
cluster-based topology that matches the real deployment of sensor networks.\nBased on this topology, we propose a novel cluster-distribution-based hierarchical hypercube model to establish pairwise keys.\nThe key contribution is that our scheme does not require the assumption, made by previous schemes, that all nodes can communicate directly with each other, while it still maintains a high probability of key establishment, low memory overhead and good security performance.\nWe develop a new pairwise key establishment algorithm on top of our hierarchical hypercube model.\nThe structure of this paper is as follows: Section 3 presents a new distribution model of cluster-deployed sensor networks.\nSection 4 proposes a new hierarchical hypercube model.\nSection 5 discusses the mapping between the cluster-deployed sensor network and the hierarchical hypercube model.\nSections 6 and 7 design the new pairwise key establishment algorithm based on the hierarchical hypercube model and give detailed analyses.\nFinally, Section 8 concludes the paper.\n2.\nPRELIMINARY\nDefinition 1 (Key Predistribution): The procedure used to load the corresponding encryption and decryption algorithms into sensor nodes before deployment is called key predistribution.\nDefinition 2 (Pairwise Key): For any two nodes A and B, if they share a common key E, then E is called a pairwise key between them.\nDefinition 3 (Key Path): For any two nodes A0 and Ak that do not share a pairwise key, if there exists a path A0, A1, A2, ..., Ak-1, Ak such that there is at least one pairwise key between the nodes Ai and Aj for 0 ≤ i ≤ k-1 and 1 ≤ j ≤ k, then the path consisting of A0, A1, A2, ..., Ak-1, Ak is called a key path between A0 and Ak.\nDefinition 4 (n-dimensional Hypercube): An n-dimensional hypercube (or n-cube) H(v, n) is a topology with the following properties: (1) it consists of n·v^(n-1) edges; (2) each node can be coded as a string with n positions, b1b2...bn, where 0 ≤ b1, b2, ..., bn ≤ v-1; (3) any two nodes are neighbors, i.e., there is an edge between them, iff their node codes differ in exactly one position.\n3.\nMODEL OF CLUSTERS DEPLOYED SENSOR NETWORKS\nIn some actual applications, sensors are deployed from airplanes.\nSuppose the sensors are deployed in k rounds and the communication radius of every sensor is r; then the sensors deployed in the same round can be regarded as belonging to the same cluster.\nWe assign a unique cluster number l (1 ≤ l ≤ k) to each cluster.\nAssuming that the sensors in each cluster form a connected graph after deployment, Figure 1 presents an actual model of cluster-deployed sensor networks.\nFigure 1: An actual model of cluster-deployed sensor networks.\nAs Figure 1 suggests, for a given node A there are many nodes in A's own cluster that can communicate directly with A, since nodes are densely deployed within a cluster.\nHowever, far fewer nodes in a neighboring cluster can communicate directly with A,
since the two clusters are not deployed at the same time.\n4.\nHIERARCHICAL HYPERCUBE MODEL\nDefinition 5 (k-levels Hierarchical Hypercube): Let there be N nodes in total; a k-levels hierarchical hypercube H(k,u,m,v,n) is constructed as follows:\n1) The N nodes are divided into k groups, and the nodes of each group are connected into an n-dimensional hypercube whose nodes are encoded as i1i2...in; these codes are called In-Cluster-Hypercube-Node-Codes, where 0 ≤ i1, i2, ..., in ≤ v-1 and v = ⌈(N/k)^(1/n)⌉ (⌈j⌉ denotes the smallest integer not less than j).\nIn this way we obtain k such hypercubes.\n2) The k hypercubes obtained above are encoded as j1j2...jm, called Out-Cluster-Hypercube-Node-Codes, where 0 ≤ j1, j2, ..., jm ≤ u-1 and u = ⌈k^(1/m)⌉.\nThe nodes of the k different hypercubes are then connected into m-dimensional hypercubes according to the following rule: nodes with the same In-Cluster-Hypercube-Node-Code and different Out-Cluster-Hypercube-Node-Codes are connected into an m-dimensional hypercube.\n(The graph constructed by the steps above is called a k-levels hierarchical hypercube, abbreviated H(k,u,m,v,n).)\n3) Any node A in H(k,u,m,v,n) can be encoded as (i, j), where i (i = i1i2...in, 0 ≤ i1, i2, ..., in ≤ v-1) is the In-Cluster-Hypercube-Node-Code of A, and j (j = j1j2...jm, 0 ≤ j1, j2, ..., jm ≤ u-1) is the Out-Cluster-Hypercube-Node-Code of A.\nThe H(k,u,m,v,n) model has the following useful properties:\nProperty 1: The diameter of the H(k,u,m,v,n) model is m + n.\nProof: The diameter of an n-dimensional hypercube is n and the diameter of an m-dimensional hypercube is m, so by Definition 5 the diameter of H(k,u,m,v,n) is m + n.\nProperty 2: The distance between any two nodes A(i1, j1) and B(i2, j2) in the H(k,u,m,v,n) model is d(A, B) = dh(i1, i2) + dh(j1, j2), where dh denotes the Hamming distance.\nProof: Since the distance between any two nodes of a hypercube equals the Hamming distance between their codes, the conclusion follows directly from Definition 5.\n5.\nMAPPING CLUSTERS DEPLOYED SENSOR NETWORKS TO H(K,U,M,V,N)\nFrom the descriptions in Sections 3 and 4, a cluster-deployed sensor network can be mapped onto a k-levels hierarchical hypercube model as follows.\nFirst, the k clusters of the sensor network are mapped to the k different levels (hypercubes) of the model.\nThen the sensor nodes in each cluster are encoded with In-Cluster-Hypercube-Node-Codes, and the sensor nodes of the k different clusters that share the same In-Cluster-Hypercube-Node-Code are encoded with Out-Cluster-Hypercube-Node-Codes, both according to Definition 5.\nConsequently, the whole sensor network is mapped onto a k-levels hierarchical hypercube model.\n6.\nH(K,U,M,V,N) MODEL-BASED PAIRWISE KEY PREDISTRIBUTION ALGORITHM FOR SENSOR NETWORKS\nTo overcome the drawbacks of polynomial-based and polynomial pool-based key predistribution algorithms, this paper proposes an innovative H(k,u,m,v,n) model-based key predistribution scheme and pairwise key establishment algorithm, which combines the advantages of polynomial-based and key pool-based encryption schemes and builds on the KDC and polynomial pool-based key predistribution models.\nThe new H(k,u,m,v,n) model-based
pairwise key establishment algorithm includes three main steps: (1) generation of the polynomial pool and key predistribution, (2) direct pairwise key discovery, (3) path key discovery.\n6.1 Generation of Polynomials Pool and Key Predistribution\nSuppose the sensor network includes N nodes and is deployed in k different rounds.\nKeys can then be predistributed to each sensor node on the basis of the H(k,u,m,v,n) model as follows:\nStep 1: The key setup server randomly generates a pool of bivariate polynomials F = { f^s_{l,<i1,i2,...,in-1>}(x, y), f^s_{<i1,i2,...,in,j1,j2,...,jm-1>}(x, y), ... }, containing ⌈N/v^n⌉·n·v^(n-1) different t-degree bivariate polynomials for the in-cluster hypercubes, together with the corresponding polynomials for the out-cluster hypercubes, over a finite field Fq, and assigns a unique polynomial ID to each bivariate polynomial in F.\nStep 2: In each round, the key setup server assigns a unique node ID (i1i2...in, j1j2...jm) to each sensor node in increasing order, where 0 ≤ i1, i2, ..., in ≤ v-1 and 0 ≤ j1, j2, ..., jm ≤ u-1.\nStep 3: The key setup server assigns a unique cluster ID l to all sensor nodes deployed in the same round, where 1 ≤ l ≤ k.\nStep 4: The key setup server predistributes the m + n bivariate polynomial shares { f^1_{l,<i2,...,in>}(i1, y), ..., f^n_{l,<i1,...,in-1>}(in, y), f^1_{<i1,...,in,j2,...,jm>}(j1, y), ..., f^m_{<i1,...,in,j1,...,jm-1>}(jm, y) }, together with the corresponding polynomial IDs, to the sensor node deployed in the l-th round with ID (i1i2...in, j1j2...jm).\n6.2 Direct Pairwise Key Discovery\nIf a node A(i1i2...in, j1j2...jm) in the sensor network wants to establish a pairwise key with a node B(i'1i'2...i'n, j'1j'2...j'm), it proceeds as follows.\nFirst, node A computes the distance between itself and node B: d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d = 1, then node A obtains a direct pairwise key with node B according to the following theorem.\nTheorem 1: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, let the distance between A and B be d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d = 1, then there exists a direct pairwise key between nodes A and B.\nProof: Since d = 1, either d1 = 1 and d2 = 0, or d1 = 0 and d2 = 1.\n1) If d1 = 1, d2 = 0: from d2 = 0, nodes A and B belong to the same cluster, say cluster l; from d1 = 1, the codes i1i2...in and i'1i'2...i'n differ in exactly one position.\nSuppose it = i't for 1 ≤ t ≤ n-1 and in ≠ i'n; then f^n_{l,<i1,i2,...,in-1>}(in, i'n) = f^n_{l,<i'1,i'2,...,i'n-1>}(i'n, in), so f^n_{l,<i1,i2,...,in-1>}(in, i'n) is a direct pairwise key between nodes A and B.\n2) If d1 = 0, d2 = 1: from d2 = 1, the codes j1j2...jm and j'1j'2...j'm differ in exactly one position.\nSuppose jt = j't for 1 ≤ t ≤ m-1 and jm ≠ j'm.\nSince d1 = 0, i1i2...in equals
i'1i'2...i'n, and therefore f^m_{<i1,i2,...,in,j1,j2,...,jm-1>}(jm, j'm) = f^m_{<i'1,i'2,...,i'n,j'1,j'2,...,j'm-1>}(j'm, jm).\nSo f^m_{<i1,i2,...,in,j1,j2,...,jm-1>}(jm, j'm) is a direct pairwise key between nodes A and B.\nBased on Theorem 1, the direct pairwise key discovery algorithm is as follows:\nStep 1: Obtain the node IDs and cluster IDs of the source node A and the destination node B.\nStep 2: Compute the distance between nodes A and B: d = d1 + d2.\nStep 3: If d1 = 1 and d2 = 0, select a common polynomial share of nodes A and B from the in-cluster shares { f^1_{l,<i2,...,in>}(i1, y), ..., f^n_{l,<i1,...,in-1>}(in, y) } to establish a direct pairwise key.\nStep 4: If d1 = 0 and d2 = 1, select a common polynomial share of nodes A and B from the out-cluster shares { f^1_{<i1,i2,...,in,j2,...,jm>}(j1, y), ..., f^m_{<i1,i2,...,in,j1,j2,...,jm-1>}(jm, y) } to establish a direct pairwise key.\nStep 5: Otherwise, no direct pairwise key exists between nodes A and B, and the algorithm turns to the following path key discovery process.\n6.3 Path Key Discovery\nIf d > 1, node A can establish a path key with node B according to the following theorem.\nTheorem 2: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, let the distance between A and B be d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nIf d > 1, then there exists a path key between nodes A and B.\nProof: Let d1 = a and d2 = b; without loss of generality assume it ≠ i't for 1 ≤ t ≤ a and it = i't for t > a, and jt ≠ j't for 1 ≤ t ≤ b and jt = j't for t > b.\nThe nodes A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) belong to the same cluster.\nBy the assumption that the nodes in the same cluster form a connected graph, there is a route among those nodes.\nMoreover, the distance between any two neighboring nodes in this sequence is 1, so by Theorem 1 a direct pairwise key exists between any two neighboring nodes of the sequence.\nFurthermore, the nodes (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm-1jm), ..., (i'1i'2...i'n, j'1j'2...
j'm-1jm) and node B belong to the same logical hypercube, since they all share the In-Cluster-Hypercube-Node-Code of node B(i'1i'2...i'n, j'1j'2...j'm).\nBy the assumption that the whole sensor network forms a connected graph, there is a route among those nodes.\nAgain, the distance between any two neighboring nodes in this sequence is 1, so by Theorem 1 a direct pairwise key exists between any two neighboring nodes of the sequence.\nHence there exists a path key between nodes A and B.\nBased on Theorem 2, the path key discovery algorithm works as follows: node A computes a key path to node B in the logical H(k,u,m,v,n) space along which any two neighboring nodes are at distance 1, and each pair of neighboring nodes on the path uses their common polynomial share to establish a direct pairwise key.\nFrom Theorem 2, any source node A can compute such a key path P to the destination node B by the above algorithm when there are no compromised nodes in the sensor network.\nOnce the key path P is computed, node A can send messages to B along P to establish an indirect pairwise key with node B.\nFigure 2 presents an example of key path establishment.\nFigure 2: Key path establishment example.\nFor example, in Figure 2, node A((012),(1234)) can establish a pairwise key with node B((121),(2334)) through the key path A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → F((121),(2234)) → B((121),(2334)), where node F must route through nodes G, H, I, J to establish a direct pairwise key with node B.\nUsing the properties of the H(k,u,m,v,n) model and combining them with the proof of Theorem 2, we can prove the following theorem.\nTheorem 3: Suppose there are no compromised nodes in the sensor network and the distance between nodes A and B is t; then there exists a shortest key path of length t between A and B logically.\nThat is, node A can establish an indirect pairwise key with node B through t-1 intermediate nodes.\nProof: Suppose the distance between nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm).\nSince d = t, by the construction of H(k,u,m,v,n) there exist t-1 intermediate nodes I1, ..., It-1 in the logical space H(k,u,m,v,n) such that the distance between any two neighboring nodes in the series A, I1, ...
, It-1, B is 1.\nBy Theorem 1, the nodes A, I1, ..., It-1, B therefore form a correct key path between A and B.\nIf any two neighboring nodes in the series A, I1, ..., It-1, B can communicate directly, then node A can establish an indirect pairwise key with node B through those t-1 intermediate nodes.\n6.4 Dynamic Path Key Discovery\nThe path key discovery algorithm of the previous section establishes a key path correctly only when there are no compromised nodes in the whole sensor network, since the key path is computed beforehand.\nIt cannot find an alternative key path when some nodes are compromised or some intermediate nodes are out of communication range, even though alternative key paths exist in the sensor network.\nThe following example shows that there are many parallel paths in the H(k,u,m,v,n) model for any given source and destination nodes, since the H(k,u,m,v,n) model is highly fault-tolerant [9,10].\nFigure 3: Alternative key path establishment example.\nFor example, consider the key path establishment example of the previous section based on Figure 2: suppose node F((121),(2234)) has been compromised; then, as Figure 3 shows, there exists an alternative key path A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → M((121),(1334)) → B((121),(2334)) that can be used to establish the indirect pairwise key between nodes A and B, where node E must route through nodes D and K to establish a direct pairwise key with node M, and node M must route through nodes N, O, G, H, I, J to establish a direct pairwise key with node B.\nSince sensors are resource limited, they easily die or move out of communication range, so the algorithm of the previous section cannot guarantee that a correct key path is established efficiently.\nIn this section we propose a dynamic path key discovery algorithm that effectively improves the probability of finding a key path.\nAlgorithm I: Dynamic key path establishment algorithm based on the H(k,u,m,v,n) model for cluster-deployed sensor networks.\nInput: A sub-sensor network H(k,u,m,v,n) with some compromised or faulty sensors and faulty links, and two reachable nodes A(a1...an, a'1...a'm) and B(b1...bn, b'1...b'm) in H(k,u,m,v,n), where a't ≠ b't for t ∈ [1, s] and a't = b't for t > s.\nOutput: A correct key path from node A to B in H(k,u,m,v,n).\nStep 1: Obtain the code strings of nodes A and B: A(a1...an, a'1...a'm) and B(b1...bn, b'1...b'm), where aj, bj ∈ [0, v-1] and a'j, b'j ∈ [0, u-1].\nStep 2: If a'1...a'm = b'1...b'm, then node A can find a route to B according to the routing algorithms of the hypercube [9-10].\nStep 3: Otherwise, node A can find a route to C(b1...bn, a'1...a'm) according to Algorithm I or Algorithm II.\nThen let I0 = C(b1...bn, a'1...a'm), I1 = (b1...bn, b'1a'2...a'm), ..., Is = B(b1...bn, b'1b'2...b's a's+1...
a'm), and let each node It in this series find a route to its neighboring node It+1 on the basis of location information (for detailed location-based routing algorithms, see [11-14]).\nStep 4: The algorithm exits.\nIf such a correct key path exists, node A can establish an indirect pairwise key with node B through it; otherwise node A fails to establish an indirect pairwise key with node B and will try again some time later.\n7.\nALGORITHM ANALYSES\n7.1 Practical Analyses\nFrom the description and analyses above, the newly proposed algorithm has the following properties.\nProperty 3: When there are no faulty or compromised nodes, under the new pairwise key predistribution scheme based on the H(k,u,m,v,n) model the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1), where N is the total number of nodes in the sensor network and N = u^m · v^n.\nProof: In the newly proposed algorithm, the predistributed polynomial shares of any node A are { f^1_{l,<i2,...,in>}(i1, y), ..., f^n_{l,<i1,...,in-1>}(in, y), f^1_{<i1,...,in,j2,...,jm>}(j1, y), ..., f^m_{<i1,i2,...,in,j1,...,jm-1>}(jm, y) }.\nIn the logical hypercube formed by the nodes in the same cluster as node A, there are n(v-1) nodes that share a direct pairwise key with A; in the logical hypercube formed by nodes of different clusters, there are m(u-1) nodes that share a direct pairwise key with A.\nTherefore m(u-1) + n(v-1) nodes in total share a direct pairwise key with node A, and since the whole sensor network has N sensor nodes, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1).\nFigure 4 compares the probability of direct pairwise key establishment between any two nodes with the dimension n for different network sizes, under the new pairwise key predistribution scheme based on the H(8,2,3,v,n) model.\nFigure 4: Probability of direct pairwise key establishment between any two nodes versus the dimension n, for different total numbers of nodes, under the new pairwise key predistribution scheme based on the H(8,2,3,v,n) model.\nFigure 4 shows that, under the H(k,u,m,v,n)-based scheme, the probability of direct pairwise key establishment decreases as the scale of the sensor network increases and, for a fixed network scale, decreases as the dimension n increases.\nTheorem 4: Let the total number of sensors in the network be N; when u ≥ v^2, the probability of direct pairwise key establishment between any two nodes under the key distribution scheme based on the hypercube model H(v,p) is smaller than under the key distribution scheme based on the H(k,u,m,v,n) model.\nProof: Since u ≥ v^2, we can let u = v^t, where t ≥ 2.\nThe total number of nodes in H(v,p) is v^p = N, and the total number of nodes in H(k,u,m,v,n) is u^m · v^n = N.
Let p = x + n; then u^m · v^n = v^x · v^n, so u^m = v^x and x = tm.\nFrom Property 3, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1).\nFrom the description in [7], the probability of direct pairwise key establishment between any two nodes under H(v,p) can be estimated as P' = p(v-1) / (N-1) = (x(v-1) + n(v-1)) / (N-1).\nNext we show that m(u-1) ≥ x(v-1): since u - 1 = v^t - 1 = (v-1)(v^(t-1) + ... + v + 1) ≥ t(v-1), we have m(u-1) ≥ tm(v-1) = x(v-1).\nTherefore the conclusion of the theorem stands.\nWe give an example to illustrate Theorem 4.\nSuppose the total number of nodes in the sensor network is N = 2^14, with H(k,u,m,v,n) = H(16,4,2,2,10) and H(v,p) = H(2,14); then the probability of direct pairwise key establishment between any two nodes under the H(k,u,m,v,n) model is P = (m(u-1) + n(v-1)) / (N-1) = (2(4-1) + 10(2-1)) / (2^14 - 1) = 16 / (2^14 - 1), while under the H(v,p) model it is P' = p(v-1) / (N-1) = 14(2-1) / (2^14 - 1) = 14 / (2^14 - 1).\nFor a sensor network with N nodes in total, Figure 5 compares the probability of direct pairwise key establishment between any two nodes under the H(k,u,m,v,n) model with that under the H(v,p) model, when u = 4 and v = 2.\nFigure 5: Comparison of the probability of direct pairwise key establishment under the H(v,p) and H(k,u,m,v,n) models, as a function of the scale of the sensor network.\nFigure 5 shows that Theorem 4 holds.\nTheorem 5: Let the total number of sensors in the network be N; then the pairwise key distribution scheme based on the hypercube model H(v,p) is only a special case of the pairwise key distribution scheme based on the H(k,u,m,v,n) model.\nProof: In the pairwise key distribution scheme based on the H(k,u,m,v,n) model, let k = 1 (u = 1, m = 0), which means the whole sensor network consists of a single cluster.\nThen the H(k,u,m,v,n) model degrades into the H(v,n) model.\nFrom the foregoing analyses and the definition of the pairwise key distribution scheme based on the hypercube model H(v,p) in [7], the conclusion of the theorem follows.\n7.2 Security Analyses\nAgainst the pairwise key establishment algorithm based on the H(k,u,m,v,n) model, intruders can launch two kinds of attacks: 1) the attackers may target the pairwise key between two particular sensor nodes, in order to compromise the pairwise key between them or to prevent them from establishing one; 2) the attackers may attack the whole sensor network, in order to decrease the probability of pairwise key establishment or to increase its cost.\nAttacks against a pair of sensor nodes\n1.\nSuppose the intruders want to attack two particular sensor nodes u, v, neither of which is compromised, and the intruders want to compromise the pairwise key between them.\n1) If u, v can establish a direct pairwise key, then the only way to compromise the key is to compromise the common bivariate polynomial f(x,
y) between u and v.\nSince the degree of the bivariate polynomial f(x, y) is t, the intruders need to compromise at least t+1 sensor nodes that hold a share of f(x, y).\n2) If u, v can establish an indirect pairwise key through intermediate nodes, then the intruders need to compromise at least one intermediate node, or compromise the common bivariate polynomial f(x, y) between two neighboring intermediate nodes.\nBut even if the intruders succeed, nodes u and v can still re-establish an indirect pairwise key through alternative intermediate nodes.\n2.\nSuppose the intruders want to attack two particular sensor nodes u, v, neither of which is compromised, and the intruders want to prevent them from establishing a pairwise key.\nThen the intruders need to compromise all of the m + n bivariate polynomials of node u or of node v.\nSince the degree of each bivariate polynomial is t, for each bivariate polynomial the intruders need to compromise at least t+1 sensor nodes that hold a share of it.\nTherefore the intruders need to compromise (m+n)(t+1) sensor nodes altogether to prevent u and v from establishing a pairwise key.\nAttacks against the sensor network\nSuppose the attackers know the distribution of the polynomials over sensor nodes; they may then systematically attack the network by compromising the polynomials in F one by one in order to compromise the entire network.\nAssume the fraction of compromised polynomials is pc; then there are up to N' = pc·(m·u^m·v^n + n·u^m·v^n) = pc·N·(m+n) sensor nodes that have at least one compromised polynomial share.\nNone of the remaining N - N' sensor nodes holds a compromised polynomial share.\nSo the remaining N - N' sensor nodes can establish direct pairwise keys using any of their polynomial shares.\nHowever, the indirect pairwise keys among the remaining N - N' sensor nodes may be affected, and those nodes may need to re-establish new indirect pairwise keys by selecting alternative intermediate nodes that do not belong to the N' affected nodes.\nFor a sensor network of scale N = 10000, Figure 6 presents the relationship between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on different H(k,u,m,v,n) distribution models.\nFigure 6: The relationship between pc (the fraction of compromised bivariate polynomials) and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on different H(k,u,m,v,n) distribution models.\nTheorem 6: Let the total number of sensors in the network be N and the fraction of compromised polynomials be pc; then, when u > v, the number of affected nodes under the H(v,p) model-based key predistribution scheme is larger than under the H(k,u,m,v,n) model-based key predistribution scheme.\nProof: The number of affected nodes under the H(k,u,m,v,n) model-based key predistribution scheme is pc·N·(m+n), and it is proved in [7] that the number of affected nodes under the H(v,p) model-based key predistribution scheme is pc·N·p.\nLet p = x + n; then u^m · v^n = v^x · v^n, so u^m = v^x.\nSince u > v, we have x > m, and therefore pc·N·(m+n) < pc·N·(x+n) = pc·N·p.
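The closed-form estimates used in this section are simple enough to check numerically.\nThe following minimal Python sketch (not part of the original paper; the parameter values are illustrative) computes Property 3's direct-key probability for the H(k,u,m,v,n) model, the corresponding probability p(v-1)/(N-1) for the flat hypercube H(v,p) of [7], and the pc·N·(m+n) versus pc·N·p bounds on affected nodes behind Theorem 6, reusing the worked example with N = 2^14.\n\ndef direct_key_probability(m, u, n, v):\n    # P = (m(u-1) + n(v-1)) / (N-1) for the H(k,u,m,v,n) model (Property 3)\n    N = (u ** m) * (v ** n)\n    return (m * (u - 1) + n * (v - 1)) / (N - 1)\n\ndef direct_key_probability_hypercube(p, v):\n    # P' = p(v-1) / (N-1) for the hypercube model H(v,p) of [7]\n    N = v ** p\n    return p * (v - 1) / (N - 1)\n\ndef affected_nodes(pc, N, shares_per_node):\n    # Upper bound pc * N * (number of polynomial shares per node) on the\n    # number of nodes holding at least one compromised polynomial share\n    return pc * N * shares_per_node\n\nif __name__ == '__main__':\n    print(direct_key_probability(m=2, u=4, n=10, v=2))    # 16 / (2^14 - 1), for H(16,4,2,2,10)\n    print(direct_key_probability_hypercube(p=14, v=2))    # 14 / (2^14 - 1), for H(2,14)\n    print(affected_nodes(pc=0.01, N=2 ** 14, shares_per_node=2 + 10))  # H(k,u,m,v,n): m + n shares per node\n    print(affected_nodes(pc=0.01, N=2 ** 14, shares_per_node=14))      # H(v,p): p shares per node\n\nAs expected from Theorem 4 and Theorem 6, the H(k,u,m,v,n) configuration yields the higher direct-key probability and the smaller number of affected nodes in this sketch.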
For a sensor network of scale N = 10000, Figure 7 presents the relationship between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(9,3,2,2,n) and H(2,p) distribution models.\nFigure 7 confirms the conclusion of Theorem 6 and shows that, for a fixed network scale, the number of affected sensor nodes increases as the number of compromised polynomials increases.\nFigure 6 shows the same trend: when the scale of the sensor network is fixed, the number of affected sensor nodes increases with the number of compromised polynomials.\nFigure 7: The relationship between pc (the fraction of compromised bivariate polynomials) and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(9,3,2,2,n) and H(2,p) distribution models.\n8.\nCONCLUSION\nA new hierarchical hypercube model named H(k,u,m,v,n) is proposed, which can be used for pairwise key predistribution in cluster-deployed sensor networks.\nBased on the H(k,u,m,v,n) model, an innovative pairwise key predistribution scheme and algorithm are designed by combining the good properties of the Polynomial Key and Key Pool encryption schemes.\nThe new algorithm uses the characteristics of the node codes and the high fault tolerance of the H(k,u,m,v,n) model to route and predistribute pairwise keys, so that nodes are not required to communicate directly with each other, as the algorithms proposed in [7] require.\nConsequently, the traditional hypercube-based pairwise key predistribution algorithm [7] is only a special case of the new algorithm proposed in this paper.\nTheoretical and experimental analyses show that the newly proposed algorithm is an efficient pairwise key establishment algorithm suitable for cluster-deployed sensor networks."} {"id": "J-10", "title": "", "abstract": "", "keyphrases": ["onlin review", "reput mechan", "featur-by-featur estim of qualiti", "clear incent absenc", "product util", "brag-and-moan model", "rate", "great probabl bi-modal", "u-shape distribut", "semant orient of product evalu", "correl", "larg span of time"], "prmu": [], "lvl-1": "Understanding User Behavior in Online Feedback Reporting Arjun Talwar Ecole Polytechnique Fédérale de Lausanne (EPFL) Artificial Intelligence Lab Lausanne, Switzerland arjun@math.stanford.edu Radu Jurca Ecole Polytechnique Fédérale de Lausanne (EPFL) Artificial Intelligence Lab Lausanne, Switzerland radu.jurca@epfl.ch Boi Faltings Ecole Polytechnique Fédérale de Lausanne (EPFL) Artificial Intelligence Lab Lausanne, Switzerland boi.faltings@epfl.ch ABSTRACT Online reviews have become increasingly popular as a way to judge the quality of various products and services.\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult.\nIn this paper, we investigate underlying factors that influence user behavior when reporting feedback.\nWe look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports.\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common
rating for that feature.\nSecond, we show that a user``s rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews.\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias.\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Experimentation, Reliability 1.\nMOTIVATIONS The spread of the internet has made it possible for online feedback forums (or reputation mechanisms) to become an important channel for Word-of-mouth regarding products, services or other types of commercial interactions.\nNumerous empirical studies [10, 15, 13, 5] show that buyers seriously consider online feedback when making purchasing decisions, and are willing to pay reputation premiums for products or services that have a good reputation.\nRecent analysis, however, raises important questions regarding the ability of existing forums to reflect the real quality of a product.\nIn the absence of clear incentives, users with a moderate outlook will not bother to voice their opinions, which leads to an unrepresentative sample of reviews.\nFor example, [12, 1] show that Amazon1 ratings of books or CDs follow with great probability bi-modal, U-shaped distributions where most of the ratings are either very good, or very bad.\nControlled experiments, on the other hand, reveal opinions on the same items that are normally distributed.\nUnder these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user an estimator with high variance that is often false.\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.\nHu et al. 
[12] propose the Brag-and-Moan Model, where users rate only if their utility of the product (drawn from a normal distribution) falls outside a median interval.\nThe authors conclude that the model explains the empirical distribution of reports, and offers insights into smarter ways of estimating the true quality of the product.\nIn the present paper we extend this line of research, and attempt to explain further facts about the behavior of users when reporting online feedback.\nUsing actual hotel reviews from the TripAdvisor2 website, we consider two additional sources of information besides the basic numerical ratings submitted by users.\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings.\nWe use text-mining techniques similar to [7] and [3]; however, we are only interested in identifying what aspects of the service the user is discussing, without computing the semantic orientation of the text.\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature.\nIntuitively, lengthy comments reveal the importance of the feature to the user.\nSince people tend to be more knowledgeable about the aspects they consider important, users who discuss a given feature in more detail might be assumed to have more authority in evaluating that feature.\nSecond, we investigate the relationship between a review and the reviews that preceded it.\n(1 http://www.amazon.com 2 http://www.tripadvisor.com/)\nFigure 1: The TripAdvisor page displaying reviews for a popular Boston hotel.\nName of hotel and advertisements were deliberately erased.\nA perusal of online reviews shows that ratings are often part of discussion threads, where one post is not necessarily independent of other posts.\nOne may see, for example, users who make an effort to contradict, or vehemently agree with, the remarks of previous users.\nBy analyzing the time sequence of reports, we conclude that past reviews influence future reports, as they create some prior expectation regarding the quality of service.\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [17, 18, 16, 21], which is later reflected in the user's rating.\nWe propose a model that captures the dependence of ratings on prior expectations, and validate it using the empirical data we collected.\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews.\nOur first result can be used to determine a feature-by-feature estimate of quality, where for each feature a different subset of reviews (i.e., those with lengthy comments on that feature) is considered.\nThe second leads to an algorithm that outputs a more precise estimate of the real quality.\n2.\nTHE DATA SET\nWe use in this paper real hotel reviews collected from the popular travel site TripAdvisor.\nTripAdvisor indexes hotels from cities across the world, along with reviews written by travelers.\nUsers can search the site by giving the hotel's name and location (optional).\nThe reviews for a given hotel are displayed as a list (ordered from the most recent to the oldest), with 5 reviews per page.\nThe reviews contain: • information about the author of the review (e.g., dates of stay, username of the reviewer, location of the reviewer); • the overall rating (from 1, lowest, to 5, highest); • a textual review containing a title for
the review, free comments, and the main things the reviewer liked and disliked; • numerical ratings (from 1, lowest, to 5, highest) for different features (e.g., cleanliness, service, location, etc.).\nBelow the name of the hotel, TripAdvisor displays the address of the hotel, general information (number of rooms, number of stars, short description, etc.), the average overall rating, the TripAdvisor ranking, and an average rating for each feature.\nFigure 1 shows the page for a popular Boston hotel whose name (along with advertisements) was explicitly erased.\nWe selected three cities for this study: Boston, Sydney and Las Vegas.\nFor each city we considered all hotels that had at least 10 reviews, and recorded all reviews.\nTable 1 presents the number of hotels considered in each city, the total number of reviews recorded for each city, and the distribution of hotels with respect to the star-rating (as available on the TripAdvisor site).\nNote that not all hotels have a star-rating.\nTable 1: A summary of the data set.\nCity | # Reviews | # Hotels | # of Hotels with 1, 2, 3, 4 & 5 stars\nBoston | 3993 | 58 | 1+3+17+15+2\nSydney | 1371 | 47 | 0+0+9+13+10\nLas Vegas | 5593 | 40 | 0+3+10+9+6\nFor each review we recorded the overall rating, the textual review (title and body of the review) and the numerical rating on 7 features: Rooms(R), Service(S), Cleanliness(C), Value(V), Food(F), Location(L) and Noise(N).\nTripAdvisor does not require users to submit anything other than the overall rating, hence a typical review rates few additional features, regardless of the discussion in the textual comment.\nOnly the features Rooms(R), Service(S), Cleanliness(C) and Value(V) are rated by a significant number of users.\nHowever, we also selected the features Food(F), Location(L) and Noise(N) because they are referred to in a significant number of textual comments.\nFor each feature we record the numerical rating given by the user, or 0 when the rating is missing.\nThe typical length of the textual comment amounts to approximately 200 words.\nAll data was collected by crawling the TripAdvisor site in September 2006.\n2.1 Formal notation\nWe will formally refer to a review by a tuple (r, T) where: • r = (rf) is a vector containing the ratings rf ∈ {0, 1, ..., 5} for the features f ∈ F = {O, R, S, C, V, F, L, N}; note that the overall rating, rO, is recorded, by abuse of notation, as the rating for the feature Overall(O); • T is the textual comment that accompanies the review.\nReviews are indexed by the variable i, such that (ri, Ti) is the ith review in our database.\nSince we don't record the username of the reviewer, we will also say that the ith review in our data set was submitted by user i.\nWhen we need to consider only the reviews of a given hotel, h, we will use (ri(h), Ti(h)) to denote the ith review about the hotel h.
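To make the notation concrete, here is a minimal Python sketch (not from the paper; class and variable names are illustrative assumptions) of a review as a tuple (r, T): a rating vector over the eight features with 0 marking a missing rating, plus the free-text comment, together with a per-feature average that skips missing ratings.\n\nfrom dataclasses import dataclass, field\n\nFEATURES = ['O', 'R', 'S', 'C', 'V', 'F', 'L', 'N']  # Overall, Rooms, Service, Cleanliness, Value, Food, Location, Noise\n\n@dataclass\nclass Review:\n    r: dict = field(default_factory=lambda: {f: 0 for f in FEATURES})  # ratings in {0, 1, ..., 5}; 0 = not rated\n    T: str = ''                                                        # free textual comment\n\ndef average_feature_rating(reviews, f):\n    # Average rating of feature f over the reviews that actually rate it\n    rated = [rev.r[f] for rev in reviews if rev.r[f] != 0]\n    return sum(rated) / len(rated) if rated else 0.0\n\n# Example: two reviews of a hotel h; the first mirrors the example review discussed later in Section 3,\n# the second is invented for illustration\nhotel_h = [\n    Review(r={'O': 3, 'R': 3, 'S': 3, 'C': 4, 'V': 2, 'F': 0, 'L': 0, 'N': 0}, T='...'),\n    Review(r={'O': 5, 'R': 4, 'S': 5, 'C': 5, 'V': 4, 'F': 0, 'L': 5, 'N': 0}, T='...'),\n]\nprint(average_feature_rating(hotel_h, 'V'))  # 3.0\n\nThe 0-for-missing convention matches the data description above, so feature averages are computed only over the reviews that actually rate the feature.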
3.\nEVIDENCE FROM TEXTUAL COMMENTS The free textual comments associated to online reviews are a valuable source of information for understanding the reasons behind the numerical ratings left by the reviewers.\nThe text may, for example, reveal concrete examples of aspects that the user liked or disliked, thus justifying some of the high, respectively low ratings for certain features.\nThe text may also offer guidelines for understanding the preferences of the reviewer, and the weights of different features when computing an overall rating.\nThe problem, however, is that free textual comments are difficult to read.\nUsers are required to scroll through many reviews and read mostly repetitive information.\nSignificant improvements would be obtained if the reviews were automatically interpreted and aggregated.\nUnfortunately, this seems a difficult task for computers since human users often use witty language, abbreviations, cultural specific phrases, and the figurative style.\nNevertheless, several important results use the textual comments of online reviews in an automated way.\nUsing well established natural language techniques, reviews or parts of reviews can be classified as having a positive or negative semantic orientation.\nPang et al. [2] classify movie reviews into positive/negative by training three different classifiers (Naive Bayes, Maximum Entropy and SVM) using classification features based on unigrams, bigrams or part-of-speech tags.\nDave et al. [4] analyze reviews from CNet and Amazon, and surprisingly show that classification features based on unigrams or bigrams perform better than higher-order n-grams.\nThis result is challenged by Cui et al. [3] who look at large collections of reviews crawled from the web.\nThey show that the size of the data set is important, and that bigger training sets allow classifiers to successfully use more complex classification features based on n-grams.\nHu and Liu [11] also crawl the web for product reviews and automatically identify product attributes that have been discussed by reviewers.\nThey use Wordnet to compute the semantic orientation of product evaluations and summarize user reviews by extracting positive and negative evaluations of different product features.\nPopescu and Etzioni [20] analyze a similar setting, but use search engine hit-counts to identify product attributes; the semantic orientation is assigned through the relaxation labeling technique.\nGhose et al. [7, 8] analyze seller reviews from the Amazon secondary market to identify the different dimensions (e.g., delivery, packaging, customer support, etc.) 
of reputation.\nThey parse the text, and tag the part-of-speech of each word.\nFrequent nouns, noun phrases and verbal phrases are identified as dimensions of reputation, while the corresponding modifiers (i.e., adjectives and adverbs) are used to derive numerical scores for each dimension.\nThe enhanced reputation measure correlates better with the pricing information observed in the market.\nPavlou and Dimoka [19] analyze eBay reviews and find that textual comments have an important impact on reputation premiums.\nOur approach is similar to the previously mentioned works, in the sense that we identify the aspects (i.e., hotel features) discussed by the users in the textual reviews.\nHowever, we do not compute the semantic orientation of the text, nor attempt to infer missing ratings.\nWe define the weight, wi f, of feature f ∈ F in the text Ti associated with the review (ri, Ti), as the fraction of Ti dedicated to discussing aspects (both positive and negative) related to feature f.\nWe propose an elementary method to approximate the values of these weights.\nFor each feature we manually construct a word list Lf containing approximately 50 words that are most commonly associated with the feature f.\nThe initial words were selected by reading some of the reviews and seeing which words coincide with discussion of which features.\nThe list was then extended by adding all thesaurus entries related to the initial words.\nFinally, we brainstormed for missing words that would normally be associated with each of the features.\nLet Lf ∩ Ti be the list of terms common to both Lf and Ti.\nEach term of Lf is counted the number of times it appears in Ti, with two exceptions: • in cases where the user submits a title to the review, we account for the title text by appending it three times to the review text Ti.\nThe intuitive assumption is that the user's opinion is more strongly reflected in the title than in the body of the review.\nFor example, many reviews are accurately summarized by titles such as 'Excellent service, terrible location' or 'Bad value for money'; • certain words that occur only once in the text are counted multiple times if their relevance to that feature is particularly strong.\nThese are 'root' words for each feature (e.g., 'staff' is a root word for the feature Service), and are weighted either 2 or 3.\nEach feature was assigned up to 3 such root words, so almost all words are counted only once.\nThe list of words for the feature Rooms is given for reference in Appendix A.\nThe weight wi f is computed as: wi f = |Lf ∩ Ti| / Σg∈F |Lg ∩ Ti| (1), where |Lf ∩ Ti| is the number of terms common to Lf and Ti.\nThe weight for the feature Overall was set to min{|Ti|/5000, 1}, where |Ti| is the number of characters in Ti.\nThe following is a TripAdvisor review for a Boston hotel (the name of the hotel is omitted): I'll start by saying that I'm more of a Holiday Inn person than a *** type.\nSo I get frustrated when I pay double the room rate and get half the amenities that I'd get at a Hampton Inn or Holiday Inn.\nThe location was definitely the main asset of this place.\nIt was only a few blocks from the Hynes Center subway stop and it was easy to walk to some good restaurants in the Back Bay area.\nBoylston isn't far off at all.\nSo I had no trouble with foregoing a rental car and taking the subway from the airport to the hotel and using the subway for any other travel.\nOtherwise, they make you pay for anything
and everything.\n136 And when you``ve already dropped $215/night on the room, that gets frustrating.The room itself was decent, about what I would expect.\nStaff was also average, not bad and not excellent.\nAgain, I think you``re paying for location and the ability to walk to a lot of good stuff.\nBut I think next time I``ll stay in Brookline, get more amenities, and use the subway a bit more.\nThis numerical ratings associated to this review are rO = 3, rR = 3, rS = 3, rC = 4, rV = 2 for features Overall(O), Rooms(R), Service(S), Cleanliness(C) and Value(V) respectively.\nThe ratings for the features Food(F), Location(L) and Noise(N) are absent (i.e., rF = rL = rN = 0).\nThe weights wf are computed from the following lists of common terms: LR \u2229 T ={room}; wR = 0.066 LS \u2229 T ={3 * Staff, amenities}; wS = 0.267 LC \u2229 T = \u2205; wC = 0 LV \u2229 T ={$, rate}; wV = 0.133 LF \u2229 T ={restaurant}; wF = 0.067 LL \u2229 T ={2 * center, 2 * walk, 2 * location, area}; wL = 0.467 LN \u2229 T = \u2205; wN = 0 The root words ``Staff'' and ``Center'' were tripled and doubled respectively.\nThe overall weight of the textual review is wO = 0.197.\nThese values account reasonably well for the weights of different features in the discussion of the reviewer.\nOne point to note is that some terms in the lists Lf possess an inherent semantic orientation.\nFor example the word ``grime'' (belonging to the list LC ) would be used most often to assert the presence, and not the absence of grime.\nThis is unavoidable, but care was taken to ensure words from both sides of the spectrum were used.\nFor this reason, some lists such as LR contain only nouns of objects that one would typically describe in a room (see Appendix A).\nThe goal of this section is to analyse the influence of the weights wi f on the numerical ratings ri f .\nIntuitively, users who spent a lot of their time discussing a feature f (i.e., wi f is high) had something to say about their experience with regard to this feature.\nObviously, feature f is important for user i.\nSince people tend to be more knowledgeable in the aspects they consider important, our hypothesis is that the ratings ri f (corresponding to high weights wi f ) constitute a subset of expert ratings for feature f. Figure 2 plots the distribution of the rates r i(h) C with respect to the weights w i(h) C for the cleanliness of a Las Vegas hotel, h. 
Here, the high ratings are restricted to the reviews that discuss cleanliness very little.\nWhenever cleanliness appears in the discussion, the ratings are low.\nMany hotels exhibit similar rating patterns for various features.\nRatings corresponding to low weights span the whole spectrum from 1 to 5, while the ratings corresponding to high weights are grouped more closely together (either around good or around bad ratings).\nWe therefore make the following hypothesis: Hypothesis 1.\nThe ratings ri f corresponding to the reviews where wi f is high are more similar to each other than to the overall collection of ratings.\nTo test the hypothesis, we take the entire set of reviews and, feature by feature, compute the standard deviation of the ratings with high weights and the standard deviation of the entire set of ratings.\nHigh weights were defined as those belonging to the upper 20% of the weight range for the corresponding feature.\nIf Hypothesis 1 were true, the standard deviation of all ratings should be higher than the standard deviation of the ratings with high weights.\nFigure 2: The distribution of ratings against the weight of the cleanliness feature.\nWe use a standard T-test to measure the significance of the results.\nCity by city and feature by feature, Table 2 presents the average standard deviation of all ratings, and the average standard deviation of ratings with high weights.\nIndeed, the ratings with high weights have a lower standard deviation, and the results are significant at the standard 0.05 significance threshold (although for certain cities taken independently there does not seem to be a significant difference, the results are significant for the entire data set).\nPlease note that only the features O, R, S, C and V were considered, since for the others (F, L and N) we did not have enough ratings.\nTable 2: Average standard deviation for all ratings, and average standard deviation for ratings with high weights.\nIn square brackets, the corresponding p-values for a positive difference between the two.\nCity | | O | R | S | C | V\nBoston | all | 1.189 | 0.998 | 1.144 | 0.935 | 1.123\nBoston | high | 0.948 | 0.778 | 0.954 | 0.767 | 0.891\nBoston | p-val | [0.000] | [0.004] | [0.045] | [0.080] | [0.009]\nSydney | all | 1.040 | 0.832 | 1.101 | 0.847 | 0.963\nSydney | high | 0.801 | 0.618 | 0.691 | 0.690 | 0.798\nSydney | p-val | [0.012] | [0.023] | [0.000] | [0.377] | [0.037]\nVegas | all | 1.272 | 1.142 | 1.184 | 1.119 | 1.242\nVegas | high | 1.072 | 0.752 | 1.169 | 0.907 | 1.003\nVegas | p-val | [0.0185] | [0.001] | [0.918] | [0.120] | [0.126]\nHypothesis 1 not only provides some basic understanding of the rating behavior of online users, it also suggests ways of computing better quality estimates.\nWe can, for example, construct a feature-by-feature quality estimate with much lower variance: for each feature we take the subset of reviews that amply discuss that feature, and output as a quality estimate the average rating over this subset.\nInitial experiments suggest that the average feature-by-feature ratings computed in this way differ from the average ratings computed on the whole data set.\nGiven that high weights are indeed indicators of expert opinions, the estimates obtained in this way are more accurate than the current ones.\nNevertheless, the validation of this underlying assumption requires further controlled experiments.\n4.\nTHE INFLUENCE OF PAST RATINGS\nTwo important assumptions are generally made about reviews submitted to online forums.\nThe first is that ratings truthfully reflect the quality observed by the users; the second is that reviews are
4. THE INFLUENCE OF PAST RATINGS

Two important assumptions are generally made about reviews submitted to online forums. The first is that ratings truthfully reflect the quality observed by the users; the second is that reviews are independent from one another. While anecdotal evidence [9, 22] challenges the first assumption, in this section we address the second. A perusal of online reviews shows that reviews are often part of discussion threads, where users make an effort to contradict, or vehemently agree with, the remarks of previous users. Consider, for example, the following review:

I don't understand the negative reviews... the hotel was a little dark, but that was the style. It was very artsy. Yes it was close to the freeway, but in my opinion the sound of an occasional loud car is better than hearing the ding ding of slot machines all night! The staff on-hand is FABULOUS. The waitresses are great (and *** does not deserve the bad review she got, she was 100% attentive to us!), the bartenders are friendly and professional at the same time...

Here, the user was disturbed by previous negative reports, addressed these concerns, and set about trying to correct them. Not surprisingly, his ratings were considerably higher than the average ratings up to that point. It seems that TripAdvisor users regularly read the reports submitted by previous users before booking a hotel, or before writing a review. Past reviews create a prior expectation regarding the quality of service, and this expectation influences the submitted review. We believe this observation holds for most online forums. The subjective perception of quality is directly proportional to how well the actual experience meets the prior expectation, a fact confirmed by an important line of econometric and marketing research [17, 18, 16, 21]. The correlation between reviews has also been confirmed by recent research on the dynamics of online review forums [6].

4.1 Prior Expectations

We define the prior expectation of user $i$ regarding the feature $f$ as the average of the previously available ratings on the feature $f$:

$e_f(i) = \frac{\sum_{j < i,\, r^j_f \neq 0} r^j_f}{|\{j < i \mid r^j_f \neq 0\}|}$

As a first hypothesis, we assert that the rating $r^i_f$ is a function of the prior expectation $e_f(i)$:

Hypothesis 2. The ratings $r^i_f$ that follow low prior expectations $e_f(i)$ are, on average, higher than the ratings that follow high prior expectations.

We define high and low expectations as those that are above, respectively below, a certain cutoff value $\theta$. The sets of ratings preceded by high, respectively low, expectations are defined as follows:

$R^{high}_f = \{r^i_f \mid e_f(i) > \theta\}$;  $R^{low}_f = \{r^i_f \mid e_f(i) < \theta\}$

These sets are specific to each (hotel, feature) pair, and in our experiments we took $\theta = 4$. This rather high value is close to the average rating across all features and all hotels, and is justified by the fact that our data set contains mostly high-quality hotels. For each city, we take all hotels and compute the average ratings in the sets $R^{high}_f$ and $R^{low}_f$ (see Table 3).

Table 3: Average ratings for reviews preceded by low (first value in the cell) and high (second value in the cell) expectations; p-values for a positive difference in square brackets.

The average rating among reviews following low prior expectations is significantly higher than the average rating following high expectations.
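A brief sketch of how $e_f(i)$ and the high/low split might be computed is given below. The paper does not say how the very first review of a hotel (which has no prior ratings) is handled, so it is left undefined here; the cutoff $\theta = 4$ follows the text, and the function names are ours.

```python
import numpy as np

def prior_expectations(ratings_in_time_order: list[int]) -> list[float]:
    """e_f(i): running average of the ratings on feature f submitted before review i.
    Missing ratings (encoded as 0) are skipped; the first review gets NaN."""
    expectations, total, count = [], 0.0, 0
    for r in ratings_in_time_order:
        expectations.append(total / count if count else float("nan"))
        if r != 0:
            total += r
            count += 1
    return expectations

def average_by_expectation(ratings, expectations, theta: float = 4.0):
    """Average rating in R_low (expectation below theta) and R_high (above theta)."""
    r = np.asarray(ratings, dtype=float)
    e = np.asarray(expectations, dtype=float)
    valid = (r != 0) & ~np.isnan(e)
    r_low = r[valid & (e < theta)]
    r_high = r[valid & (e > theta)]
    mean_low = r_low.mean() if len(r_low) else float("nan")
    mean_high = r_high.mean() if len(r_high) else float("nan")
    return mean_low, mean_high
```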
As further evidence, we consider all hotels for which the function $e_V(i)$ (the expectation for the feature Value) takes a high value (greater than 4) for some $i$ and a low value (less than 4) for some other $i$. Intuitively, these are the hotels for which there is a minimal degree of variation in the time sequence of reviews: the cumulative average of ratings was at some point high and afterwards became low, or vice versa. Such variations are observed for about half of all hotels in each city. Figure 3 plots the median (across the considered hotels) rating $r_V$ when $e_f(i)$ is not more than $x$ but greater than $x - 0.5$.

Figure 3: The ratings tend to decrease as the expectation increases (x-axis: expectation, y-axis: median of rating; one curve each for Boston, Sydney and Vegas).

There are two ways to interpret the function $e_f(i)$:

• The value of feature $f$ that user $i$ expects before his experience with the service, acquired by reading the reports submitted by past users. In this case, an overly high value of $e_f(i)$ would drive the user to submit a negative report (or vice versa), stemming from the difference between the actual value of the service and the inflated expectation of this value acquired before the experience.

• The expected value of feature $f$ for all subsequent visitors of the site, if user $i$ were not to submit a report. In this case, the motivation for a negative report following an overly high value of $e_f$ is different: user $i$ seeks to correct the expectation of future visitors to the site. Unlike the interpretation above, this does not require the user to derive an a priori expectation for the value of $f$.

Note that neither interpretation implies that the average up to report $i$ is inversely related to the rating at report $i$. There might exist a measure of influence exerted by past reports that pushes the author of report $i$ to submit ratings that to some extent conform with past reports: a low value of $e_f(i)$ can influence user $i$ to submit a low rating for feature $f$ because, for example, he fears that submitting a high rating will make him out to be a person with low standards (the idea that negative reports can encourage further negative reporting has been suggested before [14]). This, at first, appears to contradict Hypothesis 2. However, such conformity rating cannot continue indefinitely: once the set of reports projects a sufficiently deflated estimate of $v_f$, future reviewers with comparatively positive impressions will seek to correct the misconception.

4.2 Impact of textual comments on quality expectation

Further insight into the rating behavior of TripAdvisor users can be obtained by analyzing the relationship between the weights $w_f$ and the values $e_f(i)$. In particular, we examine the following hypothesis:

Hypothesis 3. When a large proportion of the text of a review discusses a certain feature, the difference between the rating for that feature and the average rating up to that point tends to be large.

The intuition behind this claim is that when the user is adamant about voicing his opinion regarding a certain feature, his opinion differs from the collective opinion of the previous postings. This relies on the characteristic of reputation systems as feedback forums where a user is interested in projecting his opinion, with particular strength if this opinion differs from what he perceives to be the general opinion.

To test Hypothesis 3 we measure the average absolute difference between the expectation $e_f(i)$ and the rating $r^i_f$ when the weight $w^i_f$ is high, respectively low. Weights are classified as high or low by comparing them with certain cutoff values: $w^i_f$ is low if smaller than 0.1, while $w^i_f$ is high if greater than $\theta_f$.
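A minimal sketch of this comparison follows; the feature-specific cutoffs used in the paper are given immediately below it, so the default cutoffs here are only placeholders, and the function name is ours.

```python
import numpy as np

def hypothesis3_gaps(ratings, expectations, weights,
                     low_cutoff: float = 0.1, high_cutoff: float = 0.4):
    """Average |r_f^i - e_f(i)| for reviews whose textual weight on the feature is
    high versus low. high_cutoff stands in for the feature-specific theta_f."""
    r = np.asarray(ratings, dtype=float)
    e = np.asarray(expectations, dtype=float)
    w = np.asarray(weights, dtype=float)
    valid = (r != 0) & ~np.isnan(e)
    gap = np.abs(r - e)
    gap_high = gap[valid & (w > high_cutoff)]
    gap_low = gap[valid & (w < low_cutoff)]
    mean_high = gap_high.mean() if len(gap_high) else float("nan")
    mean_low = gap_low.mean() if len(gap_low) else float("nan")
    return mean_high, mean_low
```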
Different cutoff values were used for different features: $\theta_R = 0.4$, $\theta_S = 0.4$, $\theta_C = 0.2$, and $\theta_V = 0.7$. Cleanliness has a lower cutoff since it is a feature that is rarely discussed; Value has a high cutoff for the opposite reason. Results are presented in Table 4.

Table 4: Average of $|r^i_f - e_f(i)|$ when weights are high (first row per city) and low (second row per city), with p-values for the difference in square brackets.

City        Weights    R        S        C        V
Boston      high     1.058    1.208    1.728    1.356
Boston      low      0.701    0.838    0.760    0.917
Boston      p-val   [0.022]  [0.063]  [0.000]  [0.218]
Sydney      high     1.048    1.351    1.218    1.318
Sydney      low      0.752    0.759    0.767    0.908
Sydney      p-val   [0.179]  [0.009]  [0.165]  [0.495]
Las Vegas   high     1.184    1.378    1.472    1.642
Las Vegas   low      0.772    0.834    0.808    1.043
Las Vegas   p-val   [0.071]  [0.020]  [0.006]  [0.076]

This demonstrates that when weights are unusually high, users tend to express an opinion that does not conform to the running average of the previous ratings. As we might expect, for a feature that rarely carries a high weight in the discussion (e.g., cleanliness), the difference is particularly large. Even though the difference for the feature Value is quite large for Sydney, the p-value is high; this is because only few reviews discussed value heavily. The reason could be cultural, or simply that there was less of a reason to discuss this feature.

4.3 Reporting Incentives

Previous models suggest that users who are not highly opinionated will not choose to voice their opinions [12]. In this section, we extend this model to account for the influence of expectations. The motivation for submitting feedback is not only due to extreme opinions, but also to the difference between the current reputation (i.e., the prior expectation of the user) and the actual experience. Such a rating model produces ratings that most of the time deviate from the current average rating; ratings that merely confirm the prior expectation will rarely be submitted.

We test on our data set the proportion of ratings that attempt to correct the current estimate. We define a deviant rating as one that deviates from the current expectation by at least some threshold $\theta$, i.e., $|r^i_f - e_f(i)| \geq \theta$. For each of the three considered cities, Tables 5 and 6 show the proportion of deviant ratings for $\theta = 0.5$ and $\theta = 1$.

Table 5: Proportion of deviant ratings with $\theta = 0.5$

City         O       R       S       C       V
Boston     0.696   0.619   0.676   0.604   0.684
Sydney     0.645   0.615   0.672   0.614   0.675
Las Vegas  0.721   0.641   0.694   0.662   0.724

Table 6: Proportion of deviant ratings with $\theta = 1$

City         O       R       S       C       V
Boston     0.420   0.397   0.429   0.317   0.446
Sydney     0.360   0.367   0.442   0.336   0.489
Las Vegas  0.510   0.421   0.483   0.390   0.472

These results suggest that a large proportion of users (close to one half, even for the high threshold value $\theta = 1$) deviate from the prior average. This reinforces the idea that users are more likely to submit a report when they believe they have something distinctive to add to the current stream of opinions for some feature. Such conclusions are in total agreement with prior evidence that the distribution of reports often follows bi-modal, U-shaped distributions.
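Computing the proportion of deviant ratings reported in Tables 5 and 6 amounts to a simple threshold count; a sketch, with the function name being ours:

```python
import numpy as np

def deviant_proportion(ratings, expectations, theta: float = 0.5) -> float:
    """Fraction of ratings that deviate from the prior expectation by at least theta,
    i.e. |r_f^i - e_f(i)| >= theta (Tables 5 and 6 use theta = 0.5 and theta = 1)."""
    r = np.asarray(ratings, dtype=float)
    e = np.asarray(expectations, dtype=float)
    valid = (r != 0) & ~np.isnan(e)
    return float(np.mean(np.abs(r[valid] - e[valid]) >= theta))
```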
5. MODELLING THE BEHAVIOR OF RATERS

To account for the observations described in the previous sections, we propose a model for the behavior of users when submitting online reviews. For a given hotel, we assume that the quality experienced by the users is normally distributed around some value $v_f$, which represents the objective quality offered by the hotel on feature $f$. The rating submitted by user $i$ on feature $f$ is:

$\hat{r}^i_f = \delta_f v^i_f + (1 - \delta_f) \cdot \mathrm{sign}\big(v^i_f - e_f(i)\big) \cdot \big(c + d(v^i_f, e_f(i) \mid w^i_f)\big)$   (2)

where:

• $v^i_f$ is the (unknown) quality actually experienced by the user; $v^i_f$ is assumed normally distributed around some value $v_f$;

• $\delta_f \in [0, 1]$ can be seen as a measure of the bias when reporting feedback. High values reflect the fact that users rate objectively, without being influenced by prior expectations. The value of $\delta_f$ may depend on various factors; we fix one value for each feature $f$;

• $c$ is a constant between 1 and 5;

• $w^i_f$ is the weight of feature $f$ in the textual comment of review $i$, computed according to Eq. (1);

• $d(v^i_f, e_f(i) \mid w^i_f)$ is a distance function between the expectation and the observation of user $i$. The distance function satisfies the following properties:
  - $d(y, z \mid w) \geq 0$ for all $y, z \in [0, 5]$, $w \in [0, 1]$;
  - $|d(y, z \mid w)| < |d(z, x \mid w)|$ if $|y - z| < |z - x|$;
  - $|d(y, z \mid w_1)| < |d(y, z \mid w_2)|$ if $w_1 < w_2$;
  - $c + d(v_f, e_f(i) \mid w^i_f) \in [1, 5]$.

The second term of Eq. (2) encodes the bias of the rating: the greater the distance between the true observation $v^i_f$ and the expectation $e_f$, the greater the bias.

5.1 Model Validation

We use the data set of TripAdvisor reviews to validate the behavior model presented above. For convenience, we split the rating values into three ranges: bad ($B = \{1, 2\}$), indifferent ($I = \{3, 4\}$), and good ($G = \{5\}$), and perform the following two tests:

• First, we use our model to predict the ratings that have extremal values. For every hotel, we take the sequence of reports, and whenever we encounter a rating that is either good or bad (but not indifferent) we try to predict it using Eq. (2).

• Second, instead of predicting the value of extremal ratings, we try to classify them as either good or bad. For every hotel we take the sequence of reports and, for each report (regardless of its value), we classify it as being good or bad.

To perform these tests, however, we need to estimate the objective value $v_f$, that is, the average of the true quality observations $v^i_f$. The algorithm we use is based on the intuition that the amount of conformity rating should be minimized. In other words, the value $v_f$ should be such that, as often as possible, bad ratings follow expectations above $v_f$ and good ratings follow expectations below $v_f$. Formally, we define the sets:

$\Gamma_1 = \{i \mid e_f(i) < v_f \text{ and } r^i_f \in B\}$;  $\Gamma_2 = \{i \mid e_f(i) > v_f \text{ and } r^i_f \in G\}$;

which correspond to irregularities where, even though the expectation at point $i$ is lower than the delivered value, the rating is poor, and vice versa. We define $v_f$ as the value that minimizes the union of the two sets:

$v_f = \arg\min_{v_f} |\Gamma_1 \cup \Gamma_2|$   (3)

In Eq. (2) we replace $v^i_f$ by the value $v_f$ computed in Eq. (3), and use the following distance function:

$d(v_f, e_f(i) \mid w^i_f) = \frac{|v_f - e_f(i)|}{v_f - e_f(i)} \, \big|v_f^2 - e_f(i)^2\big| \cdot (1 + 2 w^i_f)$

The constant $c \in I$ was set to $\min\{\max\{e_f(i), 3\}, 4\}$. The values of $\delta_f$ were fixed at $\{0.7, 0.7, 0.8, 0.7, 0.6\}$ for the features {Overall, Rooms, Service, Cleanliness, Value} respectively. The weights are computed as described in Section 3.

As a first experiment, we take the sets of extremal ratings $\{r^i_f \mid r^i_f \notin I\}$ for each hotel and feature. For every such rating $r^i_f$, we try to estimate it by computing $\hat{r}^i_f$ using Eq. (2).
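A minimal sketch of the model and of the fitting of $v_f$ is given below, assuming our reading of Eq. (2) and Eq. (3): the grid search over $v_f$, the toy distance function (which only respects the stated properties, and is not the paper's validation choice), and the default $\delta = 0.7$ are illustrative, and the function names are ours.

```python
import numpy as np

def estimate_objective_value(ratings, expectations, grid=None):
    """Eq. (3): choose v_f minimising the number of 'conformity' cases, i.e. bad
    ratings (1-2) preceded by expectations below v_f plus good ratings (5)
    preceded by expectations above v_f."""
    r = np.asarray(ratings, dtype=float)
    e = np.asarray(expectations, dtype=float)
    valid = (r != 0) & ~np.isnan(e)
    r, e = r[valid], e[valid]
    grid = np.arange(1.0, 5.001, 0.05) if grid is None else grid

    def irregular(v):
        gamma1 = np.sum((e < v) & (r <= 2))   # low expectation, yet a bad rating
        gamma2 = np.sum((e > v) & (r == 5))   # high expectation, yet a good rating
        return gamma1 + gamma2

    return float(min(grid, key=irregular))

def toy_distance(v, e, w):
    """Illustrative distance: non-negative, growing with |v - e| and with the weight w."""
    return 0.3 * abs(v - e) * (1.0 + 2.0 * w)

def predicted_rating(v, e, w, delta=0.7, distance=toy_distance):
    """Eq. (2): a mixture of the experienced quality and an expectation-driven bias term."""
    c = min(max(e, 3.0), 4.0)                 # the paper's choice of the constant c
    bias = np.sign(v - e) * (c + distance(v, e, w))
    return delta * v + (1.0 - delta) * bias
```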
We compare this estimator with the one obtained by simply averaging the ratings over all hotels and features:

$\bar{r}_f = \frac{\sum_{j,\, r^j_f \neq 0} r^j_f}{\sum_{j,\, r^j_f \neq 0} 1}$

Table 7 presents the ratio between the root mean square error (RMSE) obtained when using $\hat{r}^i_f$ and when using $\bar{r}_f$ to estimate the actual ratings. In all cases the estimate produced by our model is better than the simple average.

Table 7: Average of RMSE($\hat{r}_f$) / RMSE($\bar{r}_f$)

City         O       R       S       C       V
Boston     0.987   0.849   0.879   0.776   0.913
Sydney     0.927   0.817   0.826   0.720   0.681
Las Vegas  0.952   0.870   0.881   0.947   0.904

As a second experiment, we try to distinguish the sets $B_f = \{i \mid r^i_f \in B\}$ and $G_f = \{i \mid r^i_f \in G\}$ of bad, respectively good, ratings on feature $f$. For example, we compute the set $B_f$ using the following classifier (called $\sigma$):

$r^i_f \in B_f \ (\sigma_f(i) = 1) \iff \hat{r}^i_f \leq 4$

Tables 8, 9 and 10 present the Precision ($p$), Recall ($r$) and $s = \frac{2pr}{p+r}$ for the classifier $\sigma$, and compare it with a naive majority classifier $\tau$, where $\tau_f(i) = 1 \iff |B_f| \geq |G_f|$.

Table 8: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Boston

Classifier  Metric    O       R       S       C       V
σ           p       0.678   0.670   0.573   0.545   0.610
σ           r       0.626   0.659   0.619   0.612   0.694
σ           s       0.651   0.665   0.595   0.577   0.609
τ           p       0.684   0.706   0.647   0.611   0.633
τ           r       0.597   0.541   0.410   0.383   0.562
τ           s       0.638   0.613   0.502   0.471   0.595

Table 9: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Las Vegas

Classifier  Metric    O       R       S       C       V
σ           p       0.654   0.748   0.592   0.712   0.583
σ           r       0.608   0.536   0.791   0.474   0.610
σ           s       0.630   0.624   0.677   0.569   0.596
τ           p       0.685   0.761   0.621   0.748   0.606
τ           r       0.542   0.505   0.767   0.445   0.441
τ           s       0.605   0.607   0.670   0.558   0.511

Table 10: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Sydney

Classifier  Metric    O       R       S       C       V
σ           p       0.650   0.463   0.544   0.550   0.580
σ           r       0.234   0.378   0.571   0.169   0.592
σ           s       0.343   0.452   0.557   0.259   0.586
τ           p       0.562   0.615   0.600   0.500   0.600
τ           r       0.054   0.098   0.101   0.015   0.175
τ           s       0.098   0.168   0.172   0.030   0.271

We see that recall is always higher for $\sigma$, while precision is usually slightly worse. On the $s$ metric, $\sigma$ tends to add a 1-20% improvement over $\tau$, much higher in some cases for the hotels in Sydney. This is likely because Sydney reviews are more positive than those of the American cities, so cases where the number of bad reviews exceeds the number of good ones are rare. Replacing the test algorithm with one that outputs 1 with probability equal to the proportion of bad reviews improves its results for this city, but it is still outperformed by around 80%.
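The evaluation of the two classifiers can be sketched as follows, using the paper's threshold $\hat{r}^i_f \leq 4$ for $\sigma$ and the majority rule for $\tau$; the helper names and the per-(hotel, feature) call pattern are ours.

```python
import numpy as np

def evaluate_bad_rating_classifiers(actual_ratings, predicted_ratings):
    """Precision, recall and s = 2pr/(p+r) for spotting 'bad' ratings (1-2), comparing
    sigma (predict bad iff the model estimate r_hat <= 4) with the naive majority
    classifier tau (predict bad for every review iff bad reviews outnumber good ones)."""
    r = np.asarray(actual_ratings, dtype=float)
    r_hat = np.asarray(predicted_ratings, dtype=float)
    is_bad = r <= 2
    is_good = r == 5

    def scores(predicted_bad):
        tp = np.sum(predicted_bad & is_bad)
        p = tp / predicted_bad.sum() if predicted_bad.sum() else 0.0
        rec = tp / is_bad.sum() if is_bad.sum() else 0.0
        s = 2 * p * rec / (p + rec) if (p + rec) else 0.0
        return p, rec, s

    sigma_pred = r_hat <= 4
    tau_pred = np.full(len(r), is_bad.sum() >= is_good.sum())
    return {"sigma": scores(sigma_pred), "tau": scores(tau_pred)}
```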
6. SUMMARY OF RESULTS AND CONCLUSION

The goal of this paper is to explore the factors that drive a user to submit a particular rating, rather than the incentives that encouraged him to submit a report in the first place. For that we use two additional sources of information besides the vector of numerical ratings: first, the textual comments that accompany the reviews, and second, the reports previously submitted by other users.

Using simple natural language processing techniques, we were able to establish a correlation between the weight of a certain feature in the textual comment accompanying a review and the noise present in the numerical rating. Specifically, users who amply discuss a certain feature are likely to agree on a common rating. This observation allows the construction of feature-by-feature estimators of quality that have lower variance and are hopefully less noisy. Nevertheless, further evidence is required to support the intuition that ratings corresponding to high weights are expert opinions that deserve higher priority when computing estimates of quality.

Second, we emphasize the dependence of ratings on previous reports. Previous reports create an expectation of quality which affects the subjective perception of the user. We validate two facts about the hotel reviews we collected from TripAdvisor. First, the ratings following low expectations (where the expectation is computed as the average of the previous reports) are likely to be higher than the ratings following high expectations. Intuitively, the perception of quality (and consequently the rating) depends on how well the actual experience of the user meets her expectation. Second, we include evidence from the textual comments, and find that when users devote a large fraction of the text to discussing a certain feature, they are likely to motivate a divergent rating (i.e., a rating that does not conform to the prior expectation). Intuitively, this supports the hypothesis that review forums act as discussion groups where users are keen on presenting and motivating their own opinion.

We have captured the empirical evidence in a behavior model that predicts the ratings submitted by users. The final rating depends, as expected, on the true observation and on the gap between the observation and the expectation. The gap tends to have a bigger influence when an important fraction of the textual comment is dedicated to discussing a certain feature. The proposed model was validated on the empirical data and provides better estimates of the ratings actually submitted.

One assumption we make is the existence of an objective quality value $v_f$ for the feature $f$. This is rarely true, especially over large spans of time. Other explanations might also account for the correlation of ratings with past reports: for example, if $e_f(i)$ reflects the true value of $f$ at a point in time, the difference in the ratings following high and low expectations can be explained by hotel revenue models that are maximized when the value is modified accordingly. However, the idea that variation in ratings is not primarily a function of variation in value turns out to be a useful one. Our approach to approximating this elusive objective value is by no means perfect, but it conforms neatly to the idea behind the model.

A natural direction for future work is to examine concrete applications of our results. Significant improvements of quality estimates are likely to be obtained by incorporating all the empirical evidence about rating behavior. Exactly how different factors affect the decisions of users is not clear; the answer might depend on the particular application, context and culture.

7. REFERENCES

[1] A. Admati and P. Pfleiderer. Noisytalk.com: Broadcasting opinions in a noisy environment. Working Paper 1670R, Stanford University, 2000.
[2] B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP-02, the Conference on Empirical Methods in Natural Language Processing, 2002.
[3] H. Cui, V. Mittal, and M. Datar. Comparative Experiments on Sentiment Classification for Online Product Reviews. In Proceedings of AAAI, 2006.
[4] K. Dave, S. Lawrence, and D. Pennock. Mining the peanut gallery: opinion extraction and semantic classification of product reviews. In Proceedings of the 12th International Conference on the World Wide Web (WWW03), 2003.
[5] C. Dellarocas, N. Awad, and X.
Zhang.\nExploring the Value of Online Product Ratings in Revenue Forecasting: The Case of Motion Pictures.\nWorking paper, 2006.\n[6] C. Forman, A. Ghose, and B. Wiesenfeld.\nA Multi-Level Examination of the Impact of Social Identities on Economic Transactions in Electronic Markets.\nAvailable at SSRN: http://ssrn.com/abstract=918978, July 2006.\n[7] A. Ghose, P. Ipeirotis, and A. Sundararajan.\nReputation Premiums in Electronic Peer-to-Peer Markets: Analyzing Textual Feedback and Network Structure.\nIn Third Workshop on Economics of Peer-to-Peer Systems, (P2PECON), 2005.\n[8] A. Ghose, P. Ipeirotis, and A. Sundararajan.\nThe Dimensions of Reputation in electronic Markets.\nWorking Paper CeDER-06-02, New York University, 2006.\n[9] A. Harmon.\nAmazon Glitch Unmasks War of Reviewers.\nThe New York Times, February 14, 2004.\n[10] D. Houser and J. Wooders.\nReputation in Auctions: Theory and Evidence from eBay.\nJournal of Economics and Management Strategy, 15:353-369, 2006.\n[11] M. Hu and B. Liu.\nMining and summarizing customer reviews.\nIn Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD04), 2004.\n[12] N. Hu, P. Pavlou, and J. Zhang.\nCan Online Reviews Reveal a Product``s True Quality?\nIn Proceedings of ACM Conference on Electronic Commerce (EC 06), 2006.\n[13] K. Kalyanam and S. McIntyre.\nReturn on reputation in online auction market.\nWorking Paper 02/03-10-WP, Leavey School of Business, Santa Clara University., 2001.\n[14] L. Khopkar and P. Resnick.\nSelf-Selection, Slipping, Salvaging, Slacking, and Stoning: the Impacts of Negative Feedback at eBay.\nIn Proceedings of ACM Conference on Electronic Commerce (EC 05), 2005.\n[15] M. Melnik and J. Alm.\nDoes a seller``s reputation matter?\nevidence from ebay auctions.\nJournal of Industrial Economics, 50(3):337-350, 2002.\n[16] R. Olshavsky and J. Miller.\nConsumer Expectations, Product Performance and Perceived Product Quality.\nJournal of Marketing Research, 9:19-21, February 1972.\n[17] A. Parasuraman, V. Zeithaml, and L. Berry.\nA Conceptual Model of Service Quality and Its Implications for Future Research.\nJournal of Marketing, 49:41-50, 1985.\n[18] A. Parasuraman, V. Zeithaml, and L. Berry.\nSERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality.\nJournal of Retailing, 64:12-40, 1988.\n[19] P. Pavlou and A. Dimoka.\nThe Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation.\nInformation Systems Research, 17(4):392-414, 2006.\n[20] A. Popescu and O. Etzioni.\nExtracting product features and opinions from reviews.\nIn Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, 2005.\n[21] R. Teas.\nExpectations, Performance Evaluation, and Consumers'' Perceptions of Quality.\nJournal of Marketing, 57:18-34, 1993.\n[22] E. White.\nChatting a Singer Up the Pop Charts.\nThe Wall Street Journal, October 15, 1999.\nAPPENDIX A. 
LIST OF WORDS, LR, ASSOCIATED TO THE FEATURE ROOMS All words serve as prefixes: room, space, interior, decor, ambiance, atmosphere, comfort, bath, toilet, bed, building, wall, window, private, temperature, sheet, linen, pillow, hot, water, cold, water, shower, lobby, furniture, carpet, air, condition, mattress, layout, design, mirror, ceiling, lighting, lamp, sofa, chair, dresser, wardrobe, closet 142", "lvl-3": "Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services .\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult .\nIn this paper , we investigate underlying factors that influence user behavior when reporting feedback .\nWe look at two sources of information besides numerical ratings : linguistic evidence from the textual comment accompanying a review , and patterns in the time sequence of reports .\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature .\nSecond , we show that a user 's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews .\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias .\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website .\n1 .\nMOTIVATIONS\nThe spread of the internet has made it possible for online feedback forums ( or reputation mechanisms ) to become an important channel for Word-of-mouth regarding products , services or other types of commercial interactions .\nNumerous empirical studies [ 10 , 15 , 13 , 5 ] show that buyers se\nriously consider online feedback when making purchasing decisions , and are willing to pay reputation premiums for products or services that have a good reputation .\nRecent analysis , however , raises important questions regarding the ability of existing forums to reflect the real quality of a product .\nIn the absence of clear incentives , users with a moderate outlook will not bother to voice their opinions , which leads to an unrepresentative sample of reviews .\nFor example , [ 12 , 1 ] show that Amazons ratings of books or CDs follow with great probability bi-modal , U-shaped distributions where most of the ratings are either very good , or very bad .\nControlled experiments , on the other hand , reveal opinions on the same items that are normally distributed .\nUnder these circumstances , using the arithmetic mean to predict quality ( as most forums actually do ) gives the typical user an estimator with high variance that is often false .\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users .\nHu et al. 
[ 12 ] propose the `` Brag-and-Moan Model '' where users rate only if their utility of the product ( drawn from a normal distribution ) falls outside a median interval .\nThe authors conclude that the model explains the empirical distribution of reports , and offers insights into smarter ways of estimating the true quality of the product .\nIn the present paper we extend this line of research , and attempt to explain further facts about the behavior of users when reporting online feedback .\nUsing actual hotel reviews from the TripAdvisor2 website , we consider two additional sources of information besides the basic numerical ratings submitted by users .\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings .\nWe use text-mining techniques similar to [ 7 ] and [ 3 ] , however , we are only interested in identifying what aspects of the service the user is discussing , without computing the semantic orientation of the text .\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature .\nIntuitively , lengthy comments reveal the importance of the feature to the user .\nSince people tend to be more knowledgeable in the aspects they consider important , users who discuss a given feature in more details might be assumed to have more authority in evaluating that feature .\nSecond we investigate the relationship between a review\nFigure 1 : The TripAdvisor page displaying reviews\nfor a popular Boston hotel .\nName of hotel and advertisements were deliberatively erased .\nand the reviews that preceded it .\nA perusal of online reviews shows that ratings are often part of discussion threads , where one post is not necessarily independent of other posts .\nOne may see , for example , users who make an effort to contradict , or vehemently agree with , the remarks of previous users .\nBy analyzing the time sequence of reports , we conclude that past reviews influence the future reports , as they create some prior expectation regarding the quality of service .\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [ 17 , 18 , 16 , 21 ] which will later reflect in the user 's rating .\nWe propose a model that captures the dependence of ratings on prior expectations , and validate it using the empirical data we collected .\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews .\nOur first result can be used to determine a featureby-feature estimate of quality , where for each feature , a different subset of reviews ( i.e. , those with lengthy comments of that feature ) is considered .\nThe second leads to an algorithm that outputs a more precise estimate of the real quality .\n2 .\nTHE DATA SET\n2.1 Formal notation\n3 .\nEVIDENCE FROM TEXTUAL COMMENTS\n4 .\nTHE INFLUENCE OF PAST RATINGS\n4.1 Prior Expectations\n4.2 Impact of textual comments on quality expectation\n4.3 Reporting Incentives\n5 .\nMODELLING THE BEHAVIOR OF RATERS\n5.1 Model Validation\n6 .\nSUMMARY OF RESULTS AND CONCLUSION\n7 .\nREFERENCES\nAPPENDIX A. 
LIST OF WORDS , LR , ASSOCIATED TO THE FEATURE ROOMS", "lvl-4": "Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services .\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult .\nIn this paper , we investigate underlying factors that influence user behavior when reporting feedback .\nWe look at two sources of information besides numerical ratings : linguistic evidence from the textual comment accompanying a review , and patterns in the time sequence of reports .\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature .\nSecond , we show that a user 's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews .\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias .\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website .\n1 .\nMOTIVATIONS\nriously consider online feedback when making purchasing decisions , and are willing to pay reputation premiums for products or services that have a good reputation .\nRecent analysis , however , raises important questions regarding the ability of existing forums to reflect the real quality of a product .\nIn the absence of clear incentives , users with a moderate outlook will not bother to voice their opinions , which leads to an unrepresentative sample of reviews .\nUnder these circumstances , using the arithmetic mean to predict quality ( as most forums actually do ) gives the typical user an estimator with high variance that is often false .\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users .\nHu et al. 
[ 12 ] propose the `` Brag-and-Moan Model '' where users rate only if their utility of the product ( drawn from a normal distribution ) falls outside a median interval .\nThe authors conclude that the model explains the empirical distribution of reports , and offers insights into smarter ways of estimating the true quality of the product .\nIn the present paper we extend this line of research , and attempt to explain further facts about the behavior of users when reporting online feedback .\nUsing actual hotel reviews from the TripAdvisor2 website , we consider two additional sources of information besides the basic numerical ratings submitted by users .\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings .\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature .\nIntuitively , lengthy comments reveal the importance of the feature to the user .\nSince people tend to be more knowledgeable in the aspects they consider important , users who discuss a given feature in more details might be assumed to have more authority in evaluating that feature .\nSecond we investigate the relationship between a review\nFigure 1 : The TripAdvisor page displaying reviews\nfor a popular Boston hotel .\nName of hotel and advertisements were deliberatively erased .\nand the reviews that preceded it .\nA perusal of online reviews shows that ratings are often part of discussion threads , where one post is not necessarily independent of other posts .\nOne may see , for example , users who make an effort to contradict , or vehemently agree with , the remarks of previous users .\nBy analyzing the time sequence of reports , we conclude that past reviews influence the future reports , as they create some prior expectation regarding the quality of service .\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [ 17 , 18 , 16 , 21 ] which will later reflect in the user 's rating .\nWe propose a model that captures the dependence of ratings on prior expectations , and validate it using the empirical data we collected .\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews .\nOur first result can be used to determine a featureby-feature estimate of quality , where for each feature , a different subset of reviews ( i.e. 
, those with lengthy comments of that feature ) is considered .\nThe second leads to an algorithm that outputs a more precise estimate of the real quality .", "lvl-2": "Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services .\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult .\nIn this paper , we investigate underlying factors that influence user behavior when reporting feedback .\nWe look at two sources of information besides numerical ratings : linguistic evidence from the textual comment accompanying a review , and patterns in the time sequence of reports .\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature .\nSecond , we show that a user 's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews .\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias .\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website .\n1 .\nMOTIVATIONS\nThe spread of the internet has made it possible for online feedback forums ( or reputation mechanisms ) to become an important channel for Word-of-mouth regarding products , services or other types of commercial interactions .\nNumerous empirical studies [ 10 , 15 , 13 , 5 ] show that buyers se\nriously consider online feedback when making purchasing decisions , and are willing to pay reputation premiums for products or services that have a good reputation .\nRecent analysis , however , raises important questions regarding the ability of existing forums to reflect the real quality of a product .\nIn the absence of clear incentives , users with a moderate outlook will not bother to voice their opinions , which leads to an unrepresentative sample of reviews .\nFor example , [ 12 , 1 ] show that Amazons ratings of books or CDs follow with great probability bi-modal , U-shaped distributions where most of the ratings are either very good , or very bad .\nControlled experiments , on the other hand , reveal opinions on the same items that are normally distributed .\nUnder these circumstances , using the arithmetic mean to predict quality ( as most forums actually do ) gives the typical user an estimator with high variance that is often false .\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users .\nHu et al. 
[ 12 ] propose the `` Brag-and-Moan Model '' where users rate only if their utility of the product ( drawn from a normal distribution ) falls outside a median interval .\nThe authors conclude that the model explains the empirical distribution of reports , and offers insights into smarter ways of estimating the true quality of the product .\nIn the present paper we extend this line of research , and attempt to explain further facts about the behavior of users when reporting online feedback .\nUsing actual hotel reviews from the TripAdvisor2 website , we consider two additional sources of information besides the basic numerical ratings submitted by users .\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings .\nWe use text-mining techniques similar to [ 7 ] and [ 3 ] , however , we are only interested in identifying what aspects of the service the user is discussing , without computing the semantic orientation of the text .\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature .\nIntuitively , lengthy comments reveal the importance of the feature to the user .\nSince people tend to be more knowledgeable in the aspects they consider important , users who discuss a given feature in more details might be assumed to have more authority in evaluating that feature .\nSecond we investigate the relationship between a review\nFigure 1 : The TripAdvisor page displaying reviews\nfor a popular Boston hotel .\nName of hotel and advertisements were deliberatively erased .\nand the reviews that preceded it .\nA perusal of online reviews shows that ratings are often part of discussion threads , where one post is not necessarily independent of other posts .\nOne may see , for example , users who make an effort to contradict , or vehemently agree with , the remarks of previous users .\nBy analyzing the time sequence of reports , we conclude that past reviews influence the future reports , as they create some prior expectation regarding the quality of service .\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [ 17 , 18 , 16 , 21 ] which will later reflect in the user 's rating .\nWe propose a model that captures the dependence of ratings on prior expectations , and validate it using the empirical data we collected .\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews .\nOur first result can be used to determine a featureby-feature estimate of quality , where for each feature , a different subset of reviews ( i.e. , those with lengthy comments of that feature ) is considered .\nThe second leads to an algorithm that outputs a more precise estimate of the real quality .\n2 .\nTHE DATA SET\nWe use in this paper real hotel reviews collected from the popular travel site TripAdvisor .\nTripAdvisor indexes hotels from cities across the world , along with reviews written by travelers .\nUsers can search the site by giving the hotel 's name and location ( optional ) .\nThe reviews for a given hotel are displayed as a list ( ordered from the most recent to the oldest ) , with 5 reviews per page .\nThe reviews contain :\n\u2022 information about the author of the review ( e.g. 
, dates of stay , username of the reviewer , location of the reviewer ) ; \u2022 the overall rating ( from 1 , lowest , to 5 , highest ) ; \u2022 a textual review containing a title for the review , free comments , and the main things the reviewer liked and disliked ; \u2022 numerical ratings ( from 1 , lowest , to 5 , highest ) for different features ( e.g. , cleanliness , service , location , etc. . )\nBelow the name of the hotel , TripAdvisor displays the address of the hotel , general information ( number of rooms , number of stars , short description , etc ) , the average overall rating , the TripAdvisor ranking , and an average rating for each feature .\nFigure 1 shows the page for a popular Boston hotel whose name ( along with advertisements ) was explicitly erased .\nWe selected three cities for this study : Boston , Sydney and Las Vegas .\nFor each city we considered all hotels that had at least 10 reviews , and recorded all reviews .\nTable 1 presents the number of hotels considered in each city , the total number of reviews recorded for each city , and the distribution of hotels with respect to the star-rating ( as available on the TripAdvisor site ) .\nNote that not all hotels have a star-rating .\nTable 1 : A summary of the data set .\nFor each review we recorded the overall rating , the textual review ( title and body of the review ) and the numerical rating on 7 features : Rooms ( R ) , Service ( S ) , Cleanliness ( C ) , Value ( V ) , Food ( F ) , Location ( L ) and Noise ( N ) .\nTripAdvisor does not require users to submit anything other than the overall rating , hence a typical review rates few additional features , regardless of the discussion in the textual comment .\nOnly the features Rooms ( R ) , Service ( S ) , Cleanliness ( C ) and Value ( V ) are rated by a significant number of users .\nHowever , we also selected the features Food ( F ) , Location ( L ) and Noise ( N ) because they are referred to in a significant number of textual comments .\nFor each feature we record the numerical rating given by the user , or 0 when the rating is missing .\nThe typical length of the textual comment amounts to approximately 200 words .\nAll data was collected by crawling the TripAdvisor site in September 2006 .\n2.1 Formal notation\nWe will formally refer to a review by a tuple ( r , T ) where :\n\u2022 r = ( rf ) is a vector containing the ratings\nrf \u2208 { 0 , 1 , ... 
5 } for the features f \u2208 F = { O , R , S , C , V , F , L , N } ; note that the overall rating , rO , is abusively recorded as the rating for the feature Overall ( O ) ;\n\u2022 T is the textual comment that accompanies the review .\nReviews are indexed according to the variable i , such that ( ri , Ti ) is the ith review in our database .\nSince we do n't record the username of the reviewer , we will also say that the ith review in our data set was submitted by user i .\nWhen we need to consider only the reviews of a given hotel , h , we will use ( ri ( h ) , Ti ( h ) ) to denote the ith review about the hotel h.\n3 .\nEVIDENCE FROM TEXTUAL COMMENTS\nThe free textual comments associated to online reviews are a valuable source of information for understanding the reasons behind the numerical ratings left by the reviewers .\nThe text may , for example , reveal concrete examples of aspects that the user liked or disliked , thus justifying some of the high , respectively low ratings for certain features .\nThe text may also offer guidelines for understanding the preferences of the reviewer , and the weights of different features when computing an overall rating .\nThe problem , however , is that free textual comments are difficult to read .\nUsers are required to scroll through many reviews and read mostly repetitive information .\nSignificant improvements would be obtained if the reviews were automatically interpreted and aggregated .\nUnfortunately , this seems a difficult task for computers since human users often use witty language , abbreviations , cultural specific phrases , and the figurative style .\nNevertheless , several important results use the textual comments of online reviews in an automated way .\nUsing well established natural language techniques , reviews or parts of reviews can be classified as having a positive or negative semantic orientation .\nPang et al. [ 2 ] classify movie reviews into positive/negative by training three different classifiers ( Naive Bayes , Maximum Entropy and SVM ) using classification features based on unigrams , bigrams or part-of-speech tags .\nDave et al. [ 4 ] analyze reviews from CNet and Amazon , and surprisingly show that classification features based on unigrams or bigrams perform better than higher-order n-grams .\nThis result is challenged by Cui et al. [ 3 ] who look at large collections of reviews crawled from the web .\nThey show that the size of the data set is important , and that bigger training sets allow classifiers to successfully use more complex classification features based on n-grams .\nHu and Liu [ 11 ] also crawl the web for product reviews and automatically identify product attributes that have been discussed by reviewers .\nThey use Wordnet to compute the semantic orientation of product evaluations and summarize user reviews by extracting positive and negative evaluations of different product features .\nPopescu and Etzioni [ 20 ] analyze a similar setting , but use search engine hit-counts to identify product attributes ; the semantic orientation is assigned through the relaxation labeling technique .\nGhose et al. [ 7 , 8 ] analyze seller reviews from the Amazon secondary market to identify the different dimensions ( e.g. , delivery , packaging , customer support , etc. ) of reputation .\nThey parse the text , and tag the part-of-speech for each word .\nFrequent nouns , noun phrases and verbal phrases are identified as dimensions of reputation , while the corresponding modifiers ( i.e. 
, adjectives and adverbs ) are used to derive numerical scores for each dimension .\nThe enhanced reputation measure correlates better with the pricing information observed in the market .\nPavlou and Dimoka [ 19 ] analyze eBay reviews and find that textual comments have an important impact on reputation premiums .\nOur approach is similar to the previously mentioned works , in the sense that we identify the aspects ( i.e. , hotel features ) discussed by the users in the textual reviews .\nHowever , we do not compute the semantic orientation of the text , nor attempt to infer missing ratings .\nWe define the weight , wif , of feature f E F in the text Ti associated with the review ( ri , Ti ) , as the fraction of Ti dedicated to discussing aspects ( both positive and negative ) related to feature f .\nWe propose an elementary method to approximate the values of these weights .\nFor each feature we manually construct the word list Lf containing approximately 50 words that are most commonly associated to the feature f .\nThe initial words were selected from reading some of the reviews , and seeing what words coincide with discussion of which features .\nThe list was then extended by adding all thesaurus entries that were related to the initial words .\nFinally , we brainstormed for missing words that would normally be associated with each of the features .\nLet Lf nTi be the list of terms common to both Lf and Ti .\nEach term of Lf is counted the number of times it appears in T i , with two exceptions :\n\u2022 in cases where the user submits a title to the review , we account for the title text by appending it three times to the review text T i .\nThe intuitive assumption is that the user 's opinion is more strongly reflected in the title , rather than in the body of the review .\nFor example , many reviews are accurately summarized by titles such as '' Excellent service , terrible location '' or '' Bad value for money '' ; \u2022 certain words that occur only once in the text are counted multiple times if their relevance to that fea\nture is particularly strong .\nThese were ' root ' words for each feature ( e.g. , 's taff ' is a root word for the feature Service ) , and were weighted either 2 or 3 .\nEach feature was assigned up to 3 such root words , so almost all words are counted only once .\nThe list of words for the feature Rooms is given for reference in Appendix A .\nThe weight wif is computed as :\nf \u2208 , ILf n TiI where JLf nTiJ is the number of terms common to Lf and T i .\nThe weight for the feature Overall was set to min { | T i | 5000 , 1 } where JTiJ is the number of character in T i .\nThe following is a TripAdvisor review for a Boston hotel ( the name of the hotel is omitted ) : '' I 'll start by saying that\nThis numerical ratings associated to this review are rO = 3 , rR = 3 , rS = 3 , rC = 4 , rV = 2 for features Overall ( O ) , Rooms ( R ) , Service ( S ) , Cleanliness ( C ) and Value ( V ) respectively .\nThe ratings for the features Food ( F ) , Location ( L ) and Noise ( N ) are absent ( i.e. 
, rF = rL = rN = 0 ) .\nThe weights wf are computed from the following lists of common terms :\nThe root words 'S taff ' and ' Center ' were tripled and doubled respectively .\nThe overall weight of the textual review is wO = 0.197 .\nThese values account reasonably well for the weights of different features in the discussion of the reviewer .\nOne point to note is that some terms in the lists Lf possess an inherent semantic orientation .\nFor example the word ' grime ' ( belonging to the list LC ) would be used most often to assert the presence , and not the absence of grime .\nThis is unavoidable , but care was taken to ensure words from both sides of the spectrum were used .\nFor this reason , some lists such as LR contain only nouns of objects that one would typically describe in a room ( see Appendix A ) .\nThe goal of this section is to analyse the influence of the weights wif on the numerical ratings rif .\nIntuitively , users who spent a lot of their time discussing a feature f ( i.e. , wif is high ) had something to say about their experience with regard to this feature .\nObviously , feature f is important for user i .\nSince people tend to be more knowledgeable in the aspects they consider important , our hypothesis is that the ratings rif ( corresponding to high weights wif ) constitute a subset of `` expert '' ratings for feature f. Figure 2 plots the distribution of the rates ri ( h ) C with respect to the weights wi ( h ) C for the cleanliness of a Las Vegas hotel , h. Here , the high ratings are restricted to the reviews that discuss little the cleanliness .\nWhenever cleanliness appears in the discussion , the ratings are low .\nMany hotels exhibit similar rating patterns for various features .\nRatings corresponding to low weights span the whole spectrum from 1 to 5 , while the ratings corresponding to high weights are more grouped together ( either around good or bad ratings ) .\nWe therefore make the following hypothesis : HYPOTHESIS 1 .\nThe ratings rif corresponding to the reviews where wif is high , are more similar to each other than to the overall collection of ratings .\nTo test the hypothesis , we take the entire set of reviews , and feature by feature , we compute the standard deviation of the ratings with high weights , and the standard deviation of the entire set of ratings .\nHigh weights were defined as those belonging to the upper 20 % of the weight range for the corresponding feature .\nIf Hypothesis 1 were true , the standard deviation of all ratings should be higher than the standard deviation of the ratings with high weights .\nFigure 2 : The distribution of ratings against the weight of the cleanliness feature .\nWe use a standard T-test to measure the significance of the results .\nCity by city and feature by feature , Table 2 presents the average standard deviation of all ratings , and the average standard deviation of ratings with high weights .\nIndeed , the ratings with high weights have lower standard deviation , and the results are significant at the standard 0.05 significance threshold ( although for certain cities taken independently there does n't seem to be a significant difference , the results are significant for the entire data set ) .\nPlease note that only the features O , R , S , C and V were considered , since for the others ( F , L , and N ) we did n't have enough ratings .\nTable 2 : Average standard deviation for all ratings , and average standard deviation for ratings with high weights .\nIn square brackets , the corresponding 
p-values for a positive difference between the two .\nHypothesis 1 not only provides some basic understanding regarding the rating behavior of online users , it also suggests some ways of computing better quality estimates .\nWe can , for example , construct a feature-by-feature quality estimate with much lower variance : for each feature we take the subset of reviews that amply discuss that feature , and output as a quality estimate the average rating for this subset .\nInitial experiments suggest that the average feature-by-feature ratings computed in this way are different from the average ratings computed on the whole data set .\nGiven that , indeed , high weights are indicators of `` expert '' opinions , the estimates obtained in this way are more accurate than the current ones .\nNevertheless , the validation of this underlying assumption requires further controlled experiments .\n4 .\nTHE INFLUENCE OF PAST RATINGS\nTwo important assumptions are generally made about reviews submitted to online forums .\nThe first is that ratings truthfully reflect the quality observed by the users ; the second is that reviews are independent from one another .\nWhile anecdotal evidence [ 9 , 22 ] challenges the first assumption3 , in this section , we address the second .\nA perusal of online reviews shows that reviews are often part of discussion threads , where users make an effort to contradict , or vehemently agree with the remarks of previous users .\nConsider , for example , the following review : '' I do n't understand the negative reviews ... the hotel was a little dark , but that was the style .\nIt was very artsy .\nYes it was close to the freeway , but in my opinion the sound of an occasional loud car is better than hearing the '' ding ding '' of slot machines all night !\nThe staff on-hand is FABULOUS .\nThe waitresses are great ( and *** does not deserve the bad review she got , she was 100 % attentive to us ! )\n, the bartenders are friendly and professional at the same time ... 
'' Here , the user was disturbed by previous negative reports , addressed these concerns , and set about trying to correct them .\nNot surprisingly , his ratings were considerably higher than the average ratings up to this point .\nIt seems that TripAdvisor users regularly read the reports submitted by previous users before booking a hotel , or before writing a review .\nPast reviews create some prior expectation regarding the quality of service , and this expectation has an influence on the submitted review .\nWe believe this observation holds for most online forums .\nThe subjective perception of quality is directly proportional to how well the actual experience meets the prior expectation , a fact confirmed by an important line of econometric and marketing research [ 17 , 18 , 16 , 21 ] .\nThe correlation between the reviews has also been confirmed by recent research on the dynamics of online review forums [ 6 ] .\n4.1 Prior Expectations\nWe define the prior expectation of user i regarding the feature f , as the average of the previously available ratings on the feature f4 :\nAs a first hypothesis , we assert that the rating rif is a function of the prior expectation ef ( i ) :\nWe define high and low expectations as those that are above , respectively below a certain cutoff value 0 .\nThe set of reviews preceded by high , respectively low expectations\nTable 3 : Average ratings for reviews preceded by low ( first value in the cell ) and high ( second value in the cell ) expectations .\nThe P-values for a positive difference are given square brackets .\nare defined as follows :\nThese sets are specific for each ( hotel , feature ) pair , and in our experiments we took 0 = 4 .\nThis rather high value is close to the average rating across all features across all hotels , and is justified by the fact that our data set contains mostly high quality hotels .\nFor each city , we take all hotels and compute the average ratings in the sets Rhighf and Rlow f ( see Table 3 ) .\nThe average rating amongst reviews following low prior expectations is significantly higher than the average rating following high expectations .\nAs further evidence , we consider all hotels for which the function eV ( i ) ( the expectation for the feature Value ) has a high value ( greater than 4 ) for some i , and a low value ( less than 4 ) for some other i. Intuitively , these are the hotels for which there is a minimal degree of variation in the timely sequence of reviews : i.e. 
Such variations are observed for about half of all hotels in each city.\nFigure 3 plots the median (across the considered hotels) rating r_V when e_f(i) is not more than x but greater than x − 0.5.\nFigure 3: The ratings tend to decrease as the expectation increases.\nThere are two ways to interpret the function e_f(i): • The expected value for feature f obtained by user i before his experience with the service, acquired by reading reports submitted by past users.\nIn this case, an overly high value for e_f(i) would drive the user to submit a negative report (or vice versa), stemming from the difference between the actual value of the service and the inflated expectation of this value acquired before his experience.\n• The expected value of feature f for all subsequent visitors of the site, if user i were not to submit a report.\nIn this case, the motivation for a negative report following an overly high value of e_f is different: user i seeks to correct the expectation of future visitors to the site.\nUnlike the interpretation above, this does not require the user to derive an a priori expectation for the value of f.\nNote that neither interpretation implies that the average up to report i is inversely related to the rating at report i.\nThere might exist a measure of influence exerted by past reports that pushes the user behind report i to submit ratings which to some extent conform with past reports: a low value for e_f(i) can influence user i to submit a low rating for feature f because, for example, he fears that submitting a high rating will make him out to be a person with low standards.\nThis, at first, appears to contradict Hypothesis 2.\nHowever, this conformity rating cannot continue indefinitely: once the set of reports projects a sufficiently deflated estimate for v_f, future reviewers with comparatively positive impressions will seek to correct this misconception.\n4.2 Impact of textual comments on quality expectation\nFurther insight into the rating behavior of TripAdvisor users can be obtained by analyzing the relationship between the weights w_f and the values e_f(i).\nIn particular, we examine the following hypothesis: HYPOTHESIS 3.\nWhen a large proportion of the text of a review discusses a certain feature, the difference between the rating for that feature and the average rating up to that point tends to be large.\nThe intuition behind this claim is that when the user is adamant about voicing his opinion regarding a certain feature, his opinion differs from the collective opinion of previous postings.\nThis relies on the characteristic of reputation systems as feedback forums in which a user is interested in projecting his opinion, with particular strength if this opinion differs from what he perceives to be the general opinion.\nTo test Hypothesis 3 we measure the average absolute difference between the expectation e_f(i) and the rating r_if when the weight w_if is high and when it is low.\nWeights are classified as high or low by comparing them with certain cutoff values: w_if is low if it is smaller than 0.1, while w_if is high if it is greater than θ_f.\nDifferent cutoff values were used for different features: θ_R = 0.4, θ_S = 0.4, θ_C = 0.2, and θ_V = 0.7.\nCleanliness has a lower cutoff since it is a feature that is rarely discussed; Value has a high cutoff for the opposite reason.\nResults are presented in Table 4.
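The Hypothesis 3 test can be sketched as follows, assuming the textual weight w_if of Eq. (1) has already been computed for each review; the (rating, weight) layout and the default cutoffs are illustrative assumptions.

# Sketch: average |r_if - e_f(i)| for high-weight vs. low-weight reviews.
def abs_gap_by_weight(reviews, low_cutoff=0.1, high_cutoff=0.4):
    """reviews: (rating, weight) pairs of one (hotel, feature) pair, in order.
    Returns (mean gap for high-weight reviews, mean gap for low-weight reviews)."""
    gaps_high, gaps_low, total = [], [], 0.0
    for i, (rating, weight) in enumerate(reviews):
        if i > 0:                          # e_f(i) is undefined for the first review
            gap = abs(rating - total / i)
            if weight > high_cutoff:
                gaps_high.append(gap)
            elif weight < low_cutoff:
                gaps_low.append(gap)
        total += rating
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(gaps_high), mean(gaps_low)

# e.g. for the feature Rooms, whose high cutoff is theta_R = 0.4:
print(abs_gap_by_weight([(5, 0.6), (3, 0.05), (2, 0.5), (4, 0.02), (5, 0.45)]))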
Table 4: Average of |r_if − e_f(i)| when weights are high.\nThis demonstrates that when weights are unusually high, users tend to express an opinion that does not conform to the net average of previous ratings.\nAs we might expect, for a feature that rarely receives a high weight in the discussion (e.g., Cleanliness), the difference is particularly large.\nEven though the difference for the feature Value is quite large for Sydney, the P-value is high.\nThis is because only a few reviews discussed Value heavily.\nThe reason could be cultural, or because there was less of a reason to discuss this feature.\n4.3 Reporting Incentives\nPrevious models suggest that users who are not highly opinionated will not choose to voice their opinions [12].\nIn this section, we extend this model to account for the influence of expectations.\nThe motivation for submitting feedback is not only due to extreme opinions, but also to the difference between the current reputation (i.e., the prior expectation of the user) and the actual experience.\nSuch a rating model produces ratings that most of the time deviate from the current average rating.\nRatings that confirm the prior expectation will rarely be submitted.\nWe test on our data set the proportion of ratings that attempt to "correct" the current estimate.\nWe define a deviant rating as one that deviates from the current expectation by at least some threshold θ, i.e., |r_if − e_f(i)| ≥ θ.\nFor each of the three considered cities, the following tables show the proportion of deviant ratings for θ = 0.5 and θ = 1.\nTable 5: Proportion of deviant ratings with θ = 0.5\nTable 6: Proportion of deviant ratings with θ = 1\nThe above results suggest that a large proportion of users (close to one half, even for the high threshold value θ = 1) deviate from the prior average.\nThis reinforces the idea that users are more likely to submit a report when they believe they have something distinctive to add to the current stream of opinions for some feature.\nSuch conclusions are in total agreement with prior evidence that the distribution of reports often follows a bi-modal, U-shaped distribution.\n5. MODELLING THE BEHAVIOR OF RATERS\nTo account for the observations described in the previous sections, we propose a model for the behavior of users when submitting online reviews.\nFor a given hotel, we make the assumption that the quality experienced by the users is normally distributed around some value v_f, which represents the "objective" quality offered by the hotel on the feature f.\nThe rating submitted by user i on feature f is: r_if = δ_f · v_if + (1 − δ_f) · (c + d(v_if, e_f(i) | w_if)) (2), where: • v_if is the (unknown) quality actually experienced by the user; v_if is assumed normally distributed around some value v_f; • δ_f ∈ [0, 1] can be seen as a measure of the bias when reporting feedback; high values reflect the fact that users rate objectively, without being influenced by prior expectations, and the value of δ_f may depend on various factors, so we fix one value for each feature f; • c is a constant between 1 and 5; • w_if is the weight of feature f in the textual comment of review i, computed according to Eq. (1); • d(v_if, e_f(i) | w_if) is a distance function between the expectation and the observation of user i.\nThe distance function satisfies the following properties: (i) d(y, z | w) ≥ 0 for all y, z ∈ [0, 5], w ∈ [0, 1]; (ii) |d(y, z | w)| < |d(z, x | w)| if |y − z| < |z − x|; (iii) |d(y, z | w1)| < |d(y, z | w2)| if w1 < w2; (iv) c + d(v_f, e_f(i) | w_if) ∈ [1, 5].\nThe second term of Eq. (2) encodes the bias of the rating.\nThe higher the distance between the true observation v_if and the function e_f, the higher the bias.
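A compact sketch of the rating model of Eq. (2) as reconstructed above, with the distance function passed in as a parameter (the concrete d used for validation appears in Section 5.1); the toy distance and the numeric values are illustrative assumptions.

# Sketch of Eq. (2): a convex combination of the experienced quality and a
# biased term driven by the gap between observation and prior expectation.
def predicted_rating(v_if, e_f_i, w_if, delta_f, c, distance):
    """v_if: quality experienced by user i; e_f_i: prior expectation e_f(i);
    w_if: textual weight of the feature; delta_f in [0, 1]; c in [1, 5];
    distance: a function d(v, e, w) such that c + d(...) stays in [1, 5]."""
    biased_term = c + distance(v_if, e_f_i, w_if)
    r = delta_f * v_if + (1.0 - delta_f) * biased_term
    return min(5.0, max(1.0, r))           # keep the result on the 1-5 scale

# Toy distance: grows with the gap and with the textual weight (illustration only).
toy_d = lambda v, e, w: max(-2.0, min(2.0, (v - e) * (1.0 + 2.0 * w)))
print(predicted_rating(v_if=4.5, e_f_i=3.0, w_if=0.4, delta_f=0.7, c=3.0, distance=toy_d))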
5.1 Model Validation\nWe use the data set of TripAdvisor reviews to validate the behavior model presented above.\nFor convenience, we split the rating values into three ranges: bad (B = {1, 2}), indifferent (I = {3, 4}), and good (G = {5}), and perform the following two tests: • First, we use our model to predict the ratings that have extremal values.\nFor every hotel, we take the sequence of reports, and whenever we encounter a rating that is either good or bad (but not indifferent) we try to predict it using Eq. (2).\n• Second, instead of predicting the value of extremal ratings, we try to classify them as either good or bad.\nFor every hotel we take the sequence of reports, and for each report (regardless of its value) we classify it as being good or bad.\nHowever, to perform these tests, we need to estimate the objective value v_f, that is, the average of the true quality observations v_if.\nThe algorithm we use is based on the intuition that the amount of conformity rating is minimized.\nIn other words, the value v_f should be such that, as often as possible, bad ratings follow expectations above v_f and good ratings follow expectations below v_f.\nFormally, for a candidate value v we define the sets {i | e_f(i) < v and r_if ∈ B} and {i | e_f(i) > v and r_if ∈ G}, which correspond to irregularities where, even though the expectation at point i is lower than the delivered value, the rating is poor, and vice versa.\nWe define v_f as the value that minimizes the size of the union of these two sets: v_f = arg min_v |{i | e_f(i) < v, r_if ∈ B} ∪ {i | e_f(i) > v, r_if ∈ G}| (3)\nIn Eq. (2) we replace v_if by the value v_f computed in Eq. (3), and use the following distance function: d(v_f, e_f(i) | w_if) = [(v_f − e_f(i)) / |v_f − e_f(i)|] · √(|v_f^2 − e_f(i)^2|) · (1 + 2 w_if).\nThe constant c ∈ I was set to min{max{e_f(i), 3}, 4}.\nThe values for δ_f were fixed at {0.7, 0.7, 0.8, 0.7, 0.6} for the features {Overall, Rooms, Service, Cleanliness, Value}, respectively.\nThe weights are computed as described in Section 3.\nAs a first experiment, we take the sets of "extremal" ratings {r_if | r_if ∉ I} for each hotel and feature.\nFor every such rating r_if, we try to estimate it by computing r̂_if using Eq. (2).\nWe compare this estimator with the estimator r̄_f obtained by simply averaging the ratings over all hotels and features.\nTable 7 presents the ratio between the root mean square errors (RMSE) obtained when using r̂_if and r̄_f to estimate the actual ratings.\nIn all cases the estimate produced by our model is better than the simple average.\nTable 7: Average of RMSE(r̂_f)
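The estimation of the objective value v_f in Eq. (3) can be sketched as a search over candidate values; the grid search below is an assumption (the procedure used to minimize the set size is not spelled out here), and the data layout is illustrative.

# Sketch of Eq. (3): pick v_f so that bad ratings rarely follow expectations
# below v_f and good ratings rarely follow expectations above v_f.
BAD, GOOD = {1, 2}, {5}

def estimate_objective_value(ratings, candidates=None):
    """ratings: per-feature ratings of one hotel, in submission order.
    Returns the candidate value v with the fewest 'irregular' reports."""
    if candidates is None:
        candidates = [1.0 + 0.1 * k for k in range(41)]   # grid over [1, 5]
    expectations, total = [], 0.0                         # e_f(i) for each report
    for i, r in enumerate(ratings):
        expectations.append(total / i if i > 0 else None)
        total += r

    def irregularities(v):
        return sum(1 for r, e in zip(ratings, expectations)
                   if e is not None and ((e < v and r in BAD) or (e > v and r in GOOD)))

    return min(candidates, key=irregularities)

print(estimate_objective_value([5, 5, 4, 2, 5, 4, 5, 1]))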
As a second experiment, we try to distinguish the sets B_f = {i | r_if ∈ B} and G_f = {i | r_if ∈ G} of bad and good ratings on the feature f.\nFor example, we compute the set B_f using the following classifier (called σ): r_if ∈ B_f (σ_f(i) = 1) ⇔ r̂_if < 4.\nTables 8, 9 and 10 present the Precision (p), Recall (r) and the combined score s = 2pr / (p + r) of σ, compared against a majority classifier τ, where τ_f(i) = 1 ⇔ |B_f| ≥ |G_f|.\nTable 8: Precision (p), Recall (r), s = 2pr / (p + r)\nTable 9: Precision (p), Recall (r), s = 2pr / (p + r)\nWe see that recall is always higher for σ and precision is usually slightly worse.\nFor the s metric, σ tends to add a 1-20% improvement over τ, much higher in some cases for hotels in Sydney.\nThis is likely because Sydney reviews are more positive than those of the American cities, and cases where the number of bad reviews exceeded the number of good ones are rare.\nReplacing the majority classifier with one that outputs 1 with probability equal to the proportion of bad reviews improves its results for this city, but it is still outperformed by around 80%.\n6. SUMMARY OF RESULTS AND CONCLUSION\nThe goal of this paper is to explore the factors that drive a user to submit a particular rating, rather than the incentives that encouraged him to submit a report in the first place.\nFor that we use two additional sources of information besides the vector of numerical ratings: first, we look at the textual comments that accompany the reviews, and second, we consider the reports that have been previously submitted by other users.\nUsing simple natural language processing algorithms, we were able to establish a correlation between the weight of a certain feature in the textual comment accompanying the review and the noise present in the numerical rating.\nSpecifically, it seems that users who amply discuss a certain feature are likely to agree on a common rating.\nThis observation allows the construction of feature-by-feature estimators of quality that have a lower variance and are hopefully less noisy.\nNevertheless, further evidence is required to support the intuition that ratings corresponding to high weights are expert opinions that deserve to be given higher priority when computing estimates of quality.\nSecond, we emphasize the dependence of ratings on previous reports.\nPrevious reports create an expectation of quality which affects the subjective perception of the user.\nWe validate two facts about the hotel reviews we collected from TripAdvisor.\nFirst, the ratings following low expectations (where the expectation is computed as the average of the previous reports) are likely to be higher than the ratings following high expectations.\nTable 10: Precision (p), Recall (r), s = 2pr / (p + r)\nIntuitively, the perception of quality (and consequently the rating) depends on how well the actual experience of the user meets her expectation.\nSecond, we include evidence from the textual comments, and find that when users devote a large fraction of the text to discussing a certain feature, they are likely to motivate a divergent rating (i.e.
, a rating that does not conform to the prior expectation ) .\nIntuitively , this supports the hypothesis that review forums act as discussion groups where users are keen on presenting and motivating their own opinion .\nWe have captured the empirical evidence in a behavior model that predicts the ratings submitted by the users .\nThe final rating depends , as expected , on the true observation , and on the gap between the observation and the expectation .\nThe gap tends to have a bigger influence when an important fraction of the textual comment is dedicated to discussing a certain feature .\nThe proposed model was validated on the empirical data and provides better estimates of the ratings actually submitted .\nOne assumption that we make is about the existence of an objective quality value vf for the feature f .\nThis is rarely true , especially over large spans of time .\nOther explanations might account for the correlation of ratings with past reports .\nFor example , if ef ( i ) reflects the true value of f at a point in time , the difference in the ratings following high and low expectations can be explained by hotel revenue models that are maximized when the value is modified accordingly .\nHowever , the idea that variation in ratings is not primarily a function of variation in value turns out to be a useful one .\nOur approach to approximate this elusive ' objective value ' is by no means perfect , but conforms neatly to the idea behind the model .\nA natural direction for future work is to examine concrete applications of our results .\nSignificant improvements of quality estimates are likely to be obtained by incorporating all empirical evidence about rating behavior .\nExactly how different factors affect the decisions of the users is not clear .\nThe answer might depend on the particular application , context and culture .\n7 .\nREFERENCES\nAPPENDIX A. LIST OF WORDS , LR , ASSOCIATED TO THE FEATURE ROOMS\nAll words serve as prefixes : room , space , interior , decor , ambiance , atmosphere , comfort , bath , toilet , bed , building , wall , window , private , temperature , sheet , linen , pillow , hot , water , cold , water , shower , lobby , furniture , carpet , air , condition , mattress , layout , design , mirror , ceiling , lighting , lamp , sofa , chair , dresser , wardrobe , closet"} {"id": "H-29", "title": "", "abstract": "", "keyphrases": ["feedback method", "posterior distribut", "enhanc feedback model", "inform retriev", "queri expans", "probabl distribut", "pseudo-relev feedback", "vector space-base algorithm", "risk", "feedback model", "estim uncertainti", "languag model", "feedback distribut"], "prmu": [], "lvl-1": "Estimation and Use of Uncertainty in Pseudo-relevance Feedback Kevyn Collins-Thompson and Jamie Callan Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-8213 U.S.A. 
{kct | callan}@cs.\ncmu.edu ABSTRACT Existing pseudo-relevance feedback methods typically perform averaging over the top-retrieved documents, but ignore an important statistical dimension: the risk or variance associated with either the individual document models, or their combination.\nTreating the baseline feedback method as a black box, and the output feedback model as a random variable, we estimate a posterior distribution for the feedback model by resampling a given query``s top-retrieved documents, using the posterior mean or mode as the enhanced feedback model.\nWe then perform model combination over several enhanced models, each based on a slightly modified query sampled from the original query.\nWe find that resampling documents helps increase individual feedback model precision by removing noise terms, while sampling from the query improves robustness (worst-case performance) by emphasizing terms related to multiple query aspects.\nThe result is a meta-feedback algorithm that is both more robust and more precise than the original strong baseline method.\nCategories and Subject Descriptors: H.3.3 [Information Retrieval]: Retrieval Models General Terms: Algorithms, Experimentation 1.\nINTRODUCTION Uncertainty is an inherent feature of information retrieval.\nNot only do we not know the queries that will be presented to our retrieval algorithm ahead of time, but the user``s information need may be vague or incompletely specified by these queries.\nEven if the query were perfectly specified, language in the collection documents is inherently complex and ambiguous and matching such language effectively is a formidable problem by itself.\nWith this in mind, we wish to treat many important quantities calculated by the retrieval system, whether a relevance score for a document, or a weight for a query expansion term, as random variables whose true value is uncertain but where the uncertainty about the true value may be quantified by replacing the fixed value with a probability distribution over possible values.\nIn this way, retrieval algorithms may attempt to quantify the risk or uncertainty associated with their output rankings, or improve the stability or precision of their internal calculations.\nCurrent algorithms for pseudo-relevance feedback (PRF) tend to follow the same basic method whether we use vector space-based algorithms such as Rocchio``s formula [16], or more recent language modeling approaches such as Relevance Models [10].\nFirst, a set of top-retrieved documents is obtained from an initial query and assumed to approximate a set of relevant documents.\nNext, a single feedback model vector is computed according to some sort of average, centroid, or expectation over the set of possibly-relevant document models.\nFor example, the document vectors may be combined with equal weighting, as in Rocchio, or by query likelihood, as may be done using the Relevance Model1 .\nThe use of an expectation is reasonable for practical and theoretical reasons, but by itself ignores potentially valuable information about the risk of the feedback model.\nOur main hypothesis in this paper is that estimating the uncertainty in feedback is useful and leads to better individual feedback models and more robust combined models.\nTherefore, we propose a method for estimating uncertainty associated with an individual feedback model in terms of a posterior distribution over language models.\nTo do this, we systematically vary the inputs to the baseline feedback method and fit a Dirichlet distribution 
to the output.\nWe use the posterior mean or mode as the improved feedback model estimate.\nThis process is shown in Figure 1.\nAs we show later, the mean and mode may vary significantly from the single feedback model proposed by the baseline method.\nWe also perform model combination using several improved feedback language models obtained by a small number of new queries sampled from the original query.\nA model``s weight combines two complementary factors: the model``s probability of generating the query, and the variance of the model, with high-variance models getting lower weight.\n1 For example, an expected parameter vector conditioned on the query observation is formed from top-retrieved documents, which are treated as training strings (see [10], p. 62).\nFigure 1: Estimating the uncertainty of the feedback model for a single query.\n2.\nSAMPLING-BASED FEEDBACK In Sections 2.1-2.5 we describe a general method for estimating a probability distribution over the set of possible language models.\nIn Sections 2.6 and 2.7 we summarize how different query samples are used to generate multiple feedback models, which are then combined.\n2.1 Modeling Feedback Uncertainty Given a query Q and a collection C, we assume a probabilistic retrieval system that assigns a real-valued document score f(D, Q) to each document D in C, such that the score is proportional to the estimated probability of relevance.\nWe make no other assumptions about f(D, Q).\nThe nature of f(D, Q) may be complex: for example, if the retrieval system supports structured query languages [12], then f(D, Q) may represent the output of an arbitrarily complex inference network defined by the structured query operators.\nIn theory, the scoring function can vary from query to query, although in this study for simplicity we keep the scoring function the same for all queries.\nOur specific query method is given in Section 3.\nWe treat the feedback algorithm as a black box and assume that the inputs to the feedback algorithm are the original query and the corresponding top-retrieved documents, with a score being given to each document.\nWe assume that the output of the feedback algorithm is a vector of term weights to be used to add or reweight the terms in the representation of the original query, with the vector normalized to form a probability distribution.\nWe view the the inputs to the feedback black box as random variables, and analyze the feedback model as a random variable that changes in response to changes in the inputs.\nLike the document scoring function f(D, Q), the feedback algorithm may implement a complex, non-linear scoring formula, and so as its inputs vary, the resulting feedback models may have a complex distribution over the space of feedback models (the sample space).\nBecause of this potential complexity, we do not attempt to derive a posterior distribution in closed form, but instead use simulation.\nWe call this distribution over possible feedback models the feedback model distribution.\nOur goal in this section is to estimate a useful approximation to the feedback model distribution.\nFor a specific framework for experiments, we use the language modeling (LM) approach for information retrieval [15].\nThe score of a document D with respect to a query Q and collection C is given by p(Q|D) with respect to language models \u02c6\u03b8Q and \u02c6\u03b8D estimated for the query and document respectively.\nWe denote the set of k top-retrieved documents from collection C in response to Q by DQ(k, C).\nFor 
simplicity, we assume that queries and documents are generated by multinomial distributions whose parameters are represented by unigram language models.\nTo incorporate feedback in the LM approach, we assume a model-based scheme in which our goal is take the query and resulting ranked documents DQ(k, C) as input, and output an expansion language model \u02c6\u03b8E, which is then interpolated with the original query model \u02c6\u03b8Q: \u02c6\u03b8New = (1 \u2212 \u03b1) \u00b7 \u02c6\u03b8Q + \u03b1 \u00b7 \u02c6\u03b8E (1) This includes the possibility of \u03b1 = 1 where the original query mode is completely replaced by the feedback model.\nOur sample space is the set of all possible language models LF that may be output as feedback models.\nOur approach is to take samples from this space and then fit a distribution to the samples using maximum likelihood.\nFor simplicity, we start by assuming the latent feedback distribution has the form of a Dirichlet distribution.\nAlthough the Dirichlet is a unimodal distribution, and in general quite limited in its expressiveness in the sample space, it is a natural match for the multinomial language model, can be estimated quickly, and can capture the most salient features of confident and uncertain feedback models, such as the overall spread of the distibution.\n2.2 Resampling document models We would like an approximation to the posterior distribution of the feedback model LF .\nTo accomplish this, we apply a widely-used simulation technique called bootstrap sampling ([7], p. 474) on the input parameters, namely, the set of top-retrieved documents.\nBootstrap sampling allows us to simulate the approximate effect of perturbing the parameters within the black box feedback algorithm by perturbing the inputs to that algorithm in a systematic way, while making no assumptions about the nature of the feedback algorithm.\nSpecifically, we sample k documents with replacement from DQ(k, C), and calculate an expansion language model \u03b8b using the black box feedback method.\nWe repeat this process B times to obtain a set of B feedback language models, to which we then fit a Dirichlet distribution.\nTypically B is in the range of 20 to 50 samples, with performance being relatively stable in this range.\nNote that instead of treating each top document as equally likely, we sample according to the estimated probabilities of relevance of each document in DQ(k, C).\nThus, a document is more likely to be chosen the higher it is in the ranking.\n2.3 Justification for a sampling approach The rationale for our sampling approach has two parts.\nFirst, we want to improve the quality of individual feedback models by smoothing out variation when the baseline feedback model is unstable.\nIn this respect, our approach resembles bagging [4], an ensemble approach which generates multiple versions of a predictor by making bootstrap copies of the training set, and then averages the (numerical) predictors.\nIn our application, top-retrieved documents can be seen as a kind of noisy training set for relevance.\nSecond, sampling is an effective way to estimate basic properties of the feedback posterior distribution, which can then be used for improved model combination.\nFor example, a model may be weighted by its prediction confidence, estimated as a function of the variability of the posterior around the model.\nfoo2-401.\nmap-Dim:5434,Size:12*12units,gaussianneighborhood (a) Topic 401 Foreign minorities, Germany foo2-402.\nmap-Dim:5698,Size:12*12units,gaussianneighborhood 
(b) Topic 402 Behavioral genetics foo2-459.\nmap-Dim:8969,Size:12*12units,gaussianneighborhood (c) Topic 459 When can a lender foreclose on property Figure 2: Visualization of expansion language model variance using self-organizing maps, showing the distribution of language models that results from resampling the inputs to the baseline expansion method.\nThe language model that would have been chosen by the baseline expansion is at the center of each map.\nThe similarity function is JensenShannon divergence.\n2.4 Visualizing feedback distributions Before describing how we fit and use the Dirichlet distribution over feedback models, it is instructive to view some examples of actual feedback model distributions that result from bootstrap sampling the top-retrieved documents from different TREC topics.\nEach point in our sample space is a language model, which typically has several thousand dimensions.\nTo help analyze the behavior of our method we used a Self-Organizing Map (via the SOM-PAK package [9]), to `flatten'' and visualize the high-dimensional density function2 .\nThe density maps for three TREC topics are shown in Figure 2 above.\nThe dark areas represent regions of high similarity between language models.\nThe light areas represent regions of low similarity - the `valleys'' between clusters.\nEach diagram is centered on the language model that would have been chosen by the baseline expansion.\nA single peak (mode) is evident in some examples, but more complex structure appears in others.\nAlso, while the distribution is usually close to the baseline feedback model, for some topics they are a significant distance apart (as measured by JensenShannon divergence), as in Subfigure 2c.\nIn such cases, the mode or mean of the feedback distribution often performs significantly better than the baseline (and in a smaller proportion of cases, significantly worse).\n2.5 Fitting a posterior feedback distribution After obtaining feedback model samples by resampling the feedback model inputs, we estimate the feedback distribution.\nWe assume that the multinomial feedback models {\u02c6\u03b81, ... , \u02c6\u03b8B} were generated by a latent Dirichlet distribution with parameters {\u03b11, ... , \u03b1N }.\nTo estimate the {\u03b11, ... , \u03b1N }, we fit the Dirichlet parameters to the B language model samples according to maximum likelihood using a generalized Newton procedure, details of which are given in Minka [13].\nWe assume a simple Dirichlet prior over the {\u03b11, ... 
, \u03b1N }, setting each to \u03b1i = \u03bc \u00b7 p(wi | C), where \u03bc is a parameter and p(\u00b7 | C) is the collection language model estimated from a set of documents from collection C.\nThe parameter fitting converges very quickly - typically just 2 or 2 Because our points are language models in the multinomial simplex, we extended SOM-PAK to support JensenShannon divergence, a widely-used similarity measure between probability distributions.\n3 iterations are enough - so that it is practical to apply at query-time when computational overhead must be small.\nIn practice, we can restrict the calculation to the vocabulary of the top-retrieved documents, instead of the entire collection.\nNote that for this step we are re-using the existing retrieved documents and not performing additional queries.\nGiven the parameters of an N-dimensional Dirichlet distribution Dir(\u03b1) the mean \u03bc and mode x vectors are easy to calculate and are given respectively by \u03bci = \u03b1iP \u03b1i (2) and xi = \u03b1i\u22121P \u03b1i\u2212N .\n(3) We can then choose the language model at the mean or the mode of the posterior as the final enhanced feedback model.\n(We found the mode to give slightly better performance.)\nFor information retrieval, the number of samples we will have available is likely to be quite small for performance reasons - usually less than ten.\nMoreover, while random sampling is useful in certain cases, it is perfectly acceptable to allow deterministic sampling distributions, but these must be designed carefully in order to approximate an accurate output variance.\nWe leave this for future study.\n2.6 Query variants We use the following methods for generating variants of the original query.\nEach variant corresponds to a different assumption about which aspects of the original query may be important.\nThis is a form of deterministic sampling.\nWe selected three simple methods that cover complimentary assumptions about the query.\nNo-expansion Use only the original query.\nThe assumption is that the given terms are a complete description of the information need.\nLeave-one-out A single term is left out of the original query.\nThe assumption is that one of the query terms is a noise term.\nSingle-term A single term is chosen from the original query.\nThis assumes that only one aspect of the query, namely, that represented by the term, is most important.\nAfter generating a variant of the original query, we combine it with the original query using a weight \u03b1SUB so that we do not stray too `far''.\nIn this study, we set \u03b1SUB = 0.5.\nFor example, using the Indri [12] query language, a leave-oneout variant of the initial query that omits the term `ireland'' for TREC topic 404 is: #weight(0.5 #combine(ireland peace talks) 0.5 #combine(peace talks)) 2.7 Combining enhanced feedback models from multiple query variants When using multiple query variants, the resulting enhanced feedback models are combined using Bayesian model combination.\nTo do this, we treat each word as an item to be classified as belonging to a relevant or non-relevant class, and derive a class probability for each word by combining the scores from each query variant.\nEach score is given by that term``s probability in the Dirichlet distribution.\nThe term scores are weighted by the inverse of the variance of the term in the enhanced feedback model``s Dirichlet distribution.\nThe prior probability of a word``s membership in the relevant class is given by the probability of the original query in the 
entire enhanced expansion model.\n3.\nEVALUATION In this section we present results confirming the usefulness of estimating a feedback model distribution from weighted resampling of top-ranked documents, and of combining the feedback models obtained from different small changes in the original query.\n3.1 General method We evaluated performance on a total of 350 queries derived from four sets of TREC topics: 51-200 (TREC-1&2), 351-400 (TREC-7), 401-450 (TREC-8), and 451-550 (wt10g, TREC-9&10).\nWe chose these for their varied content and document properties.\nFor example, wt10g documents are Web pages with a wide variety of subjects and styles while TREC-1&2 documents are more homogeneous news articles.\nIndexing and retrieval was performed using the Indri system in the Lemur toolkit [12] [1].\nOur queries were derived from the words in the title field of the TREC topics.\nPhrases were not used.\nTo generate the baseline queries passed to Indri, we wrapped the query terms with Indri``s #combine operator.\nFor example, the initial query for topic 404 is: #combine(ireland peace talks) We performed Krovetz stemming for all experiments.\nBecause we found that the baseline (Indri) expansion method performed better using a stopword list with the feedback model, all experiments used a stoplist of 419 common English words.\nHowever, an interesting side-effect of our resampling approach is that it tends to remove many stopwords from the feedback model, making a stoplist less critical.\nThis is discussed further in Section 3.6.\n3.2 Baseline feedback method For our baseline expansion method, we use an algorithm included in Indri 1.0 as the default expansion method.\nThis method first selects terms using a log-odds calculation described by Ponte [14], but assigns final term weights using Lavrenko``s relevance model[10].\nWe chose the Indri method because it gives a consistently strong baseline, is based on a language modeling approach, and is simple to experiment with.\nIn a TREC evaluation using the GOV2 corpus [6], the method was one of the topperforming runs, achieving a 19.8% gain in MAP compared to using unexpanded queries.\nIn this study, it achieves an average gain in MAP of 17.25% over the four collections.\nIndri``s expansion method first calculates a log-odds ratio o(v) for each potential expansion term v given by o(v) = X D log p(v|D) p(v|C) (4) over all documents D containing v, in collection C. Then, the expansion term candidates are sorted by descending o(v), and the top m are chosen.\nFinally, the term weights r(v) used in the expanded query are calculated based on the relevance model r(v) = X D p(q|D)p(v|D) p(v) p(D) (5) The quantity p(q|D) is the probability score assigned to the document in the initial retrieval.\nWe use Dirichlet smoothing of p(v|D) with \u03bc = 1000.\nThis relevance model is then combined with the original query using linear interpolation, weighted by a parameter \u03b1.\nBy default we used the top 50 documents for feedback and the top 20 expansion terms, with the feedback interpolation parameter \u03b1 = 0.5 unless otherwise stated.\nFor example, the baseline expanded query for topic 404 is: #weight(0.5 #combine(ireland peace talks) 0.5 #weight(0.10 ireland 0.08 peace 0.08 northern ...) 
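The document-resampling step of Section 2.2 can be sketched as follows, treating any baseline expansion method (for example, the Indri method just described) as the black-box feedback_fn; returning the plain mean of the sampled models is a simplification of the maximum-likelihood Dirichlet fit used in the paper, and all function and argument names are assumptions.

# Sketch of resampling feedback: bootstrap the top-k documents in proportion to
# their relevance scores, run the black-box feedback method on each sample, and
# summarise the resulting feedback models.
import random

def resample_feedback(scored_docs, feedback_fn, B=30, k=50, seed=0):
    """scored_docs: (doc, score) pairs from the initial retrieval, best first.
    feedback_fn: maps a list of (doc, score) pairs to a normalised {term: weight}
    model. Returns the mean of the B sampled feedback models."""
    rng = random.Random(seed)
    docs = scored_docs[:k]
    weights = [score for _, score in docs]     # draw documents in proportion to score
    samples = []
    for _ in range(B):
        boot = rng.choices(docs, weights=weights, k=len(docs))   # with replacement
        samples.append(feedback_fn(boot))
    vocab = set().union(*samples)
    mean_model = {t: sum(s.get(t, 0.0) for s in samples) / B for t in vocab}
    norm = sum(mean_model.values())
    return {t: w / norm for t, w in mean_model.items()}

# Given Dirichlet parameters alpha fitted to the samples (e.g. by Minka's method),
# the paper uses the posterior mean or mode, whose closed forms are:
def dirichlet_mean(alpha):
    s = sum(alpha)
    return [a / s for a in alpha]

def dirichlet_mode(alpha):          # requires every alpha_i > 1
    s, n = sum(alpha), len(alpha)
    return [(a - 1.0) / (s - n) for a in alpha]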
3.3 Expansion performance We measure our feedback algorithm``s effectiveness by two main criteria: precision, and robustness.\nRobustness, and the tradeoff between precision and robustness, is analyzed in Section 3.4.\nIn this section, we examine average precision and precision in the top 10 documents (P10).\nWe also include recall at 1,000 documents.\nFor each query, we obtained a set of B feedback models using the Indri baseline.\nEach feedback model was obtained from a random sample of the top k documents taken with replacement.\nFor these experiments, B = 30 and k = 50.\nEach feedback model contained 20 terms.\nOn the query side, we used leave-one-out (LOO) sampling to create the query variants.\nSingle-term query sampling had consistently worse performance across all collections and so our results here focus on LOO sampling.\nWe used the methods described in Section 2 to estimate an enhanced feedback model from the Dirichlet posterior distribution for each query variant, and to combine the feedback models from all the query variants.\nWe call our method `resampling expansion'' and denote it as RS-FB here.\nWe denote the Indri baseline feedback method as Base-FB.\nResults from applying both the baseline expansion method (Base-FB) and resampling expansion (RS-FB) are shown in Table 1.\nWe observe several trends in this table.\nFirst, the average precision of RS-FB was comparable to Base-FB, achieving an average gain of 17.6% compared to using no expansion across the four collections.\nThe Indri baseline expansion gain was 17.25%.\nAlso, the RS-FB method achieved consistent improvements in P10 over Base-FB for every topic set, with an average improvement of 6.89% over Base-FB for all 350 topics.\nThe lowest P10 gain over Base-FB was +3.82% for TREC-7 and the highest was +11.95% for wt10g.\nFinally, both Base-FB and RS-FB also consistently improved recall over using no expansion, with Base-FB achieving better recall than RS-FB for all topic sets.\n3.4 Retrieval robustness We use the term robustness to mean the worst-case average precision performance of a feedback algorithm.\nIdeally, a robust feedback method would never perform worse than using the original query, while often performing better using the expansion.\nTo evaluate robustness in this study, we use a very simple measure called the robustness index (RI)3 .\nFor a set of queries Q, the RI measure is defined as: RI(Q) = n+ \u2212 n\u2212 |Q| (6) where n+ is the number of queries helped by the feedback method and n\u2212 is the number of queries hurt.\nHere, by `helped'' we mean obtaining a higher average precision as a result of feedback.\nThe value of RI ranges from a minimum 3 This is sometimes also called the reliability of improvement index and was used in Sakai et al. 
[17].\nCollection NoExp Base-FB RS-FB TREC 1&2 AvgP 0.1818 0.2419 (+33.04%) 0.2406 (+32.24%) P10 0.4443 0.4913 (+10.57%) 0.5363 (+17.83%) Recall 15084/37393 19172/37393 15396/37393 TREC 7 AvgP 0.1890 0.2175 (+15.07%) 0.2169 (+14.75%) P10 0.4200 0.4320 (+2.85%) 0.4480 (+6.67%) Recall 2179/4674 2608/4674 2487/4674 TREC 8 AvgP 0.2031 0.2361 (+16.25%) 0.2268 (+11.70%) P10 0.3960 0.4160 (+5.05%) 0.4340 (+9.59%) Recall 2144/4728 2642/4728 2485/4728 wt10g AvgP 0.1741 0.1829 (+5.06%) 0.1946 (+11.78%) P10 0.2760 0.2630 (-4.71%) 0.2960 (+7.24%) Recall 3361/5980 3725/5980 3664/5980 Table 1: Comparison of baseline (Base-FB) feedback and feedback using re-sampling (RS-FB).\nImprovement shown for BaseFB and RS-FB is relative to using no expansion.\n(a) TREC 1&2 (upper curve); TREC 8 (lower curve) (b) TREC 7 (upper curve); wt10g (lower curve) Figure 3: The trade-off between robustness and average precision for different corpora.\nThe x-axis gives the change in MAP over using baseline expansion with \u03b1 = 0.5.\nThe yaxis gives the Robustness Index (RI).\nEach curve through uncircled points shows the RI/MAP tradeoff using the simple small-\u03b1 strategy (see text) as \u03b1 decreases from 0.5 to zero in the direction of the arrow.\nCircled points represent the tradeoffs obtained by resampling feedback for \u03b1 = 0.5.\nCollection N Base-FB RS-FB n\u2212 RI n\u2212 RI TREC 1&2 103 26 +0.495 15 +0.709 TREC 7 46 14 +0.391 10 +0.565 TREC 8 44 12 +0.455 12 +0.455 wt10g 91 48 -0.055 39 +0.143 Combined 284 100 +0.296 76 +0.465 Table 2: Comparison of robustness index (RI) for baseline feedback (Base-FB) vs. resampling feedback (RS-FB).\nAlso shown are the actual number of queries hurt by feedback (n\u2212) for each method and collection.\nQueries for which initial average precision was negligible (\u2264 0.01) were ignored, giving the remaining query count in column N. of \u22121.0, when all queries are hurt by the feedback method, to +1.0 when all queries are helped.\nThe RI measure does not take into account the magnitude or distribution of the amount of change across the set Q. 
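A small helper for the robustness index of Eq. (6), assuming paired per-query average-precision values for the unexpanded and expanded runs; names and example values are illustrative.

# Robustness Index: RI(Q) = (n+ - n-) / |Q|.
def robustness_index(ap_no_feedback, ap_feedback, min_ap=0.0):
    """ap_no_feedback, ap_feedback: dicts {query_id: average precision}.
    Queries with negligible baseline AP can be excluded via min_ap
    (Table 2 ignores queries with initial AP <= 0.01)."""
    helped = hurt = considered = 0
    for q, ap0 in ap_no_feedback.items():
        if min_ap > 0.0 and ap0 <= min_ap:
            continue
        considered += 1
        if ap_feedback[q] > ap0:
            helped += 1
        elif ap_feedback[q] < ap0:
            hurt += 1
    return (helped - hurt) / considered if considered else 0.0

print(robustness_index({"401": 0.20, "402": 0.05, "403": 0.30},
                       {"401": 0.25, "402": 0.04, "403": 0.30}))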
However, it is easy to understand as a general indication of robustness.\nOne obvious way to improve the worst-case performance of feedback is simply to use a smaller fixed \u03b1 interpolation parameter, such as \u03b1 = 0.3, placing less weight on the (possibly risky) feedback model and more on the original query.\nWe call this the `small-\u03b1'' strategy.\nSince we are also reducing the potential gains when the feedback model is `right'', however, we would expect some trade-off between average precision and robustness.\nWe therefore compared the precision/robustness trade-off between our resampling feedback algorithm, and the simple small-\u03b1 method.\nThe results are summarized in Figure 3.\nIn the figure, the curve for each topic set interpolates between trade-off points, beginning at x=0, where \u03b1 = 0.5, and continuing in the direction of the arrow as \u03b1 decreases and the original query is given more and more weight.\nAs expected, robustness continuously increases as we move along the curve, but mean average precision generally drops as the gains from feedback are eliminated.\nFor comparison, the performance of resampling feedback at \u03b1 = 0.5 is shown for each collection as the circled point.\nHigher and to the right is better.\nThis figure shows that resampling feedback gives a somewhat better trade-off than the small-\u03b1 approach for 3 of the 4 collections.\nFigure 4: Histogram showing improved robustness of resampling feedback (RS-FB) over baseline feedback (Base-FB) for all datasets combined.\nQueries are binned by % change in AP compared to the unexpanded query.\nCollection DS + QV DS + No QV TREC 1&2 AvgP 0.2406 0.2547 (+5.86%) P10 0.5263 0.5362 (+1.88%) RI 0.7087 0.6515 (-0.0572) TREC 7 AvgP 0.2169 0.2200 (+1.43%) P10 0.4480 0.4300 (-4.02%) RI 0.5652 0.2609 (-0.3043) TREC 8 AvgP 0.2268 0.2257 (-0.49%) P10 0.4340 0.4200 (-3.23%) RI 0.4545 0.4091 (-0.0454) wt10g AvgP 0.1946 0.1865 (-4.16%) P10 0.2960 0.2680 (-9.46%) RI 0.1429 0.0220 (-0.1209) Table 3: Comparison of resampling feedback using document sampling (DS) with (QV) and without (No QV) combining feedback models from multiple query variants.\nTable 2 gives the Robustness Index scores for Base-FB and RS-FB.\nThe RS-FB feedback method obtained higher robustness than Base-FB on three of the four topic sets, with only slightly worse performance on TREC-8.\nA more detailed view showing the distribution over relative changes in AP is given by the histogram in Figure 4.\nCompared to Base-FB, the RS-FB method achieves a noticable reduction in the number of queries significantly hurt by expansion (i.e. 
where AP is hurt by 25% or more), while preserving positive gains in AP.\n3.5 Effect of query and document sampling methods Given our algorithm``s improved robustness seen in Section 3.4, an important question is what component of our system is responsible.\nIs it the use of document re-sampling, the use of multiple query variants, or some other factor?\nThe results in Table 3 suggest that the model combination based on query variants may be largely account for the improved robustness.\nWhen query variants are turned off and the original query is used by itself with document sampling, there is little net change in average precision, a small decrease in P10 for 3 out of the 4 topic sets, but a significant drop in robustness for all topic sets.\nIn two cases, the RI measure drops by more than 50%.\nWe also examined the effect of the document sampling method on retrieval effectiveness, using two different strategies.\nThe `uniform weighting'' strategy ignored the relevance scores from the initial retrieval and gave each document in the top k the same probability of selection.\nIn contrast, the `relevance-score weighting'' strategy chose documents with probability proportional to their relevance scores.\nIn this way, documents that were more highly ranked were more likely to be selected.\nResults are shown in Table 4.\nThe relevance-score weighting strategy performs better overall, with significantly higher RI and P10 scores on 3 of the 4 topic sets.\nThe difference in average precision between the methods, however, is less marked.\nThis suggests that uniform weighting acts to increase variance in retrieval results: when initial average precision is high, there are many relevant documents in the top k and uniform sampling may give a more representative relevance model than focusing on the highly-ranked items.\nOn the other hand, when initial precision is low, there are few relevant documents in the bottom ranks and uniform sampling mixes in more of the non-relevant documents.\nFor space reasons we only summarize our findings on sample size here.\nThe number of samples has some effect on precision when less than 10, but performance stabilizes at around 15 to 20 samples.\nWe used 30 samples for our experiments.\nMuch beyond this level, the additional benefits of more samples decrease as the initial score distribution is more closely fit and the processing time increases.\n3.6 The effect of resampling on expansion term quality Ideally, a retrieval model should not require a stopword list when estimating a model of relevance: a robust statistical model should down-weight stopwords automatically depending on context.\nStopwords can harm feedback if selected as feedback terms, because they are typically poor discriminators and waste valuable term slots.\nIn practice, however, because most term selection methods resemble a tf \u00b7 idf type of weighting, terms with low idf but very high tf can sometimes be selected as expansion term candidates.\nThis happens, for example, even with the Relevance Model approach that is part of our baseline feedback.\nTo ensure as strong a baseline as possible, we use a stoplist for all experiments reported here.\nIf we turn off the stopword list, however, we obtain results such as those shown in Table 5 where four of the top ten baseline feedback terms for TREC topic 60 (said, but, their, not) are stopwords using the BaseFB method.\n(The top 100 expansion terms were selected to generate this example.)\nIndri``s method attempts to address the stopword problem by 
applying an initial step based on Ponte [14] to select less-common terms that have high log-odds of being in the top-ranked documents compared to the whole collection.\nNevertheless, this does not overcome the stopword problem completely, especially as the number of feedback terms grows.\nUsing resampling feedback, however, appears to mitigate Collection QV + Uniform QV + Relevance-score weighting weighting TREC 1&2 AvgP 0.2545 0.2406 (-5.46%) P10 0.5369 0.5263 (-1.97%) RI 0.6212 0.7087 (+14.09%) TREC 7 AvgP 0.2174 0.2169 (-0.23%) P10 0.4320 0.4480 (+3.70%) RI 0.4783 0.5652 (+18.17%) TREC 8 AvgP 0.2267 0.2268 (+0.04%) P10 0.4120 0.4340 (+5.34%) RI 0.4545 0.4545 (+0.00%) wt10g AvgP 0.1808 0.1946 (+7.63%) P10 0.2680 0.2960 (+10.45%) RI 0.0220 0.1099 (+399.5%) Table 4: Comparison of uniform and relevance-weighted document sampling.\nThe percentage change compared to uniform sampling is shown in parentheses.\nQV indicates that query variants were used in both runs.\nBaseline FB p(wi|R) Resampling FB p(wi|R) said 0.055 court 0.026 court 0.055 pay 0.018 pay 0.034 federal 0.012 but 0.026 education 0.011 employees 0.024 teachers 0.010 their 0.024 employees 0.010 not 0.023 case 0.010 federal 0.021 their 0.009 workers 0.020 appeals 0.008 education 0.020 union 0.007 Table 5: Feedback term quality when a stoplist is not used.\nFeedback terms for TREC topic 60: merit pay vs seniority.\nthe effect of stopwords automatically.\nIn the example of Table 5, resampling feedback leaves only one stopword (their) in the top ten.\nWe observed similar feedback term behavior across many other topics.\nThe reason for this effect appears to be the interaction of the term selection score with the top-m term cutoff.\nWhile the presence and even proportion of particular stopwords is fairly stable across different document samples, their relative position in the top-m list is not, as sets of documents with varying numbers of better, lower-frequency term candidates are examined for each sample.\nAs a result, while some number of stopwords may appear in each sampled document set, any given stopword tends to fall below the cutoff for multiple samples, leading to its classification as a high-variance, low-weight feature.\n4.\nRELATED WORK Our approach is related to previous work from several areas of information retrieval and machine learning.\nOur use of query variation was inspired by the work of YomTov et al. [20], Carpineto et al. [5], and Amati et al. [2], among others.\nThese studies use the idea of creating multiple subqueries and then examining the nature of the overlap in the documents and/or expansion terms that result from each subquery.\nModel combination is performed using heuristics.\nIn particular, the studies of Amati et al. and Carpineto et al. 
investigated combining terms from individual distributional methods using a term-reranking combination heuristic.\nIn a set of TREC topics they found wide average variation in the rank-distance of terms from different expansion methods.\nTheir combination method gave modest positive improvements in average precision.\nThe idea of examining the overlap between lists of suggested terms has also been used in early query expansion approaches.\nXu and Croft``s method of Local Context Analysis (LCA) [19] includes a factor in the empirically-derived weighting formula that causes expansion terms to be preferred that have connections to multiple query terms.\nOn the document side, recent work by Zhou & Croft [21] explored the idea of adding noise to documents, re-scoring them, and using the stability of the resulting rankings as an estimate of query difficulty.\nThis is related to our use of document sampling to estimate the risk of the feedback model built from the different sets of top-retrieved documents.\nSakai et al. [17] proposed an approach to improving the robustness of pseudo-relevance feedback using a method they call selective sampling.\nThe essence of their method is that they allow skipping of some top-ranked documents, based on a clustering criterion, in order to select a more varied and novel set of documents later in the ranking for use by a traditional pseudo-feedback method.\nTheir study did not find significant improvements in either robustness (RI) or MAP on their corpora.\nGreiff, Morgan and Ponte [8] explored the role of variance in term weighting.\nIn a series of simulations that simplified the problem to 2-feature documents, they found that average precision degrades as term frequency variance - high noiseincreases.\nDownweighting terms with high variance resulted in improved average precision.\nThis seems in accord with our own findings for individual feedback models.\nEstimates of output variance have recently been used for improved text classification.\nLee et al. 
[11] used queryspecific variance estimates of classifier outputs to perform improved model combination.\nInstead of using sampling, they were able to derive closed-form expressions for classifier variance by assuming base classifiers using simple types of inference networks.\nAndo and Zhang proposed a method that they call structural feedback [3] and showed how to apply it to query expansion for the TREC Genomics Track.\nThey used r query variations to obtain R different sets Sr of top-ranked documents that have been intersected with the top-ranked documents obtained from the original query qorig.\nFor each Si, the normalized centroid vector \u02c6wi of the documents is calculated.\nPrincipal component analysis (PCA) is then applied to the \u02c6wi to obtain the matrix \u03a6 of H left singular vectors \u03c6h that are used to obtain the new, expanded query qexp = qorig + \u03a6T \u03a6qorig.\n(7) In the case H = 1, we have a single left singular vector \u03c6: qexp = qorig + (\u03c6T qorig)\u03c6 so that the dot product \u03c6T qorig is a type of dynamic weight on the expanded query that is based on the similarity of the original query to the expanded query.\nThe use of variance as a feedback model quality measure occurs indirectly through the application of PCA.\nIt would be interesting to study the connections between this approach and our own modelfitting method.\nFinally, in language modeling approaches to feedback, Tao and Zhai [18] describe a method for more robust feedback that allows each document to have a different feedback \u03b1.\nThe feedback weights are derived automatically using regularized EM.\nA roughly equal balance of query and expansion model is implied by their EM stopping condition.\nThey propose tailoring the stopping parameter \u03b7 based on a function of some quality measure of feedback documents.\n5.\nCONCLUSIONS We have presented a new approach to pseudo-relevance feedback based on document and query sampling.\nThe use of sampling is a very flexible and powerful device and is motivated by our general desire to extend current models of retrieval by estimating the risk or variance associated with the parameters or output of retrieval processes.\nSuch variance estimates, for example, may be naturally used in a Bayesian framework for improved model estimation and combination.\nApplications such as selective expansion may then be implemented in a principled way.\nWhile our study uses the language modeling approach as a framework for experiments, we make few assumptions about the actual workings of the feedback algorithm.\nWe believe it is likely that any reasonably effective baseline feedback algorithm would benefit from our approach.\nOur results on standard TREC collections show that our framework improves the robustness of a strong baseline feedback method across a variety of collections, without sacrificing average precision.\nIt also gives small but consistent gains in top10 precision.\nIn future work, we envision an investigation into how varying the set of sampling methods used and the number of samples controls the trade-off between robustness, accuracy, and efficiency.\nAcknowledgements We thank Paul Bennett for valuable discussions related to this work, which was supported by NSF grants #IIS-0534345 and #CNS-0454018, and U.S. Dept. 
of Education grant #R305G03123.\nAny opinions, findings, and conclusions or recommendations expressed in this material are the authors.\nand do not necessarily reflect those of the sponsors.\n6.\nREFERENCES [1] The Lemur toolkit for language modeling and retrieval.\nhttp://www.lemurproject.org.\n[2] G. Amati, C. Carpineto, and G. Romano.\nQuery difficulty, robustness, and selective application of query expansion.\nIn Proc.\nof the 25th European Conf.\non Information Retrieval (ECIR 2004), pages 127-137.\n[3] R. K. Ando and T. Zhang.\nA high-performance semi-supervised learning method for text chunking.\nIn Proc.\nof the 43rd Annual Meeting of the ACL, pages 1-9, June 2005.\n[4] L. Breiman.\nBagging predictors.\nMachine Learning, 24(2):123-140, 1996.\n[5] C. Carpineto, G. Romano, and V. Giannini.\nImproving retrieval feedback with multiple term-ranking function combination.\nACM Trans.\nInfo.\nSystems, 20(3):259 - 290.\n[6] K. Collins-Thompson, P. Ogilvie, and J. Callan.\nInitial results with structured queries and language models on half a terabyte of text.\nIn Proc.\nof 2005 Text REtrieval Conference.\nNIST Special Publication.\n[7] R. O. Duda, P. E. Hart, and D. G. Stork.\nPattern Classification.\nWiley and Sons, 2nd edition, 2001.\n[8] W. R. Greiff, W. T. Morgan, and J. M. Ponte.\nThe role of variance in term weighting for probabilistic information retrieval.\nIn Proc.\nof the 11th Intl..\nConf.\non Info.\nand Knowledge Mgmt.\n(CIKM 2002), pages 252-259.\n[9] T. Kohonen, J. Hynninen, J. Kangas, and J. Laaksonen.\nSOMPAK: The self-organizing map program package.\nTechnical Report A31, Helsinki University of Technology, 1996.\nhttp://www.cis.hut.fi/research/papers/som tr96.ps.Z.\n[10] V. Lavrenko.\nA Generative Theory of Relevance.\nPhD thesis, University of Massachusetts, Amherst, 2004.\n[11] C.-H.\nLee, R. Greiner, and S. Wang.\nUsing query-specific variance estimates to combine Bayesian classifiers.\nIn Proc.\nof the 23rd Intl..\nConf.\non Machine Learning (ICML 2006), pages 529-536.\n[12] D. Metzler and W. B. Croft.\nCombining the language model and inference network approaches to retrieval.\nInfo.\nProcessing and Mgmt., 40(5):735-750, 2004.\n[13] T. Minka.\nEstimating a Dirichlet distribution.\nTechnical report, 2000.\nhttp://research.microsoft.com/ minka/papers/dirichlet.\n[14] J. Ponte.\nAdvances in Information Retrieval, chapter Language models for relevance feedback, pages 73-96.\n2000.\nW.B. Croft, ed.\n[15] J. M. Ponte and W. B. Croft.\nA language modeling approach to information retrieval.\nIn Proc.\nof the 1998 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 275-281.\n[16] J. Rocchio.\nThe SMART Retrieval System, chapter Relevance Feedback in Information Retrieval, pages 313-323.\nPrentice-Hall, 1971.\nG. Salton, ed.\n[17] T. Sakai, T. Manabe, and M. Koyama.\nFlexible pseudo-relevance feedback via selective sampling.\nACM Transactions on Asian Language Information Processing (TALIP), 4(2):111-135, 2005.\n[18] T. Tao and C. Zhai.\nRegularized estimation of mixture models for robust pseudo-relevance feedback.\nIn Proc.\nof the 2006 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 162-169.\n[19] J. Xu and W. B. Croft.\nImproving the effectiveness of information retrieval with local context analysis.\nACM Trans.\nInf.\nSyst., 18(1):79-112, 2000.\n[20] E. YomTov, S. Fine, D. Carmel, and A. 
Darlow.\nLearning to estimate query difficulty.\nIn Proc.\nof the 2005 ACM SIGIR Conf.\non Research and Development in Information Retrieval, pages 512-519.\n[21] Y. Zhou and W. B. Croft.\nRanking robustness: a novel framework to predict query performance.\nIn Proc.\nof the 15th ACM Intl..\nConf.\non Information and Knowledge Mgmt.\n(CIKM 2006), pages 567-574.", "lvl-3": "Estimation and Use of Uncertainty in Pseudo-relevance Feedback\nABSTRACT\nExisting pseudo-relevance feedback methods typically perform averaging over the top-retrieved documents , but ignore an important statistical dimension : the risk or variance associated with either the individual document models , or their combination .\nTreating the baseline feedback method as a black box , and the output feedback model as a random variable , we estimate a posterior distribution for the feedback model by resampling a given query 's top-retrieved documents , using the posterior mean or mode as the enhanced feedback model .\nWe then perform model combination over several enhanced models , each based on a slightly modified query sampled from the original query .\nWe find that resampling documents helps increase individual feedback model precision by removing noise terms , while sampling from the query improves robustness ( worst-case performance ) by emphasizing terms related to multiple query aspects .\nThe result is a meta-feedback algorithm that is both more robust and more precise than the original strong baseline method .\n1 .\nINTRODUCTION\nUncertainty is an inherent feature of information retrieval .\nNot only do we not know the queries that will be presented to our retrieval algorithm ahead of time , but the user 's information need may be vague or incompletely specified by these queries .\nEven if the query were perfectly specified , language in the collection documents is inherently complex and ambiguous and matching such language effectively is a formidable problem by itself .\nWith this in mind , we wish to treat many important quantities calculated by the re\ntrieval system , whether a relevance score for a document , or a weight for a query expansion term , as random variables whose true value is uncertain but where the uncertainty about the true value may be quantified by replacing the fixed value with a probability distribution over possible values .\nIn this way , retrieval algorithms may attempt to quantify the risk or uncertainty associated with their output rankings , or improve the stability or precision of their internal calculations .\nCurrent algorithms for pseudo-relevance feedback ( PRF ) tend to follow the same basic method whether we use vector space-based algorithms such as Rocchio 's formula [ 16 ] , or more recent language modeling approaches such as Relevance Models [ 10 ] .\nFirst , a set of top-retrieved documents is obtained from an initial query and assumed to approximate a set of relevant documents .\nNext , a single feedback model vector is computed according to some sort of average , centroid , or expectation over the set of possibly-relevant document models .\nFor example , the document vectors may be combined with equal weighting , as in Rocchio , or by query likelihood , as may be done using the Relevance Model ' .\nThe use of an expectation is reasonable for practical and theoretical reasons , but by itself ignores potentially valuable information about the risk of the feedback model .\nOur main hypothesis in this paper is that estimating the uncertainty in feedback is useful and leads to 
better individual feedback models and more robust combined models .\nTherefore , we propose a method for estimating uncertainty associated with an individual feedback model in terms of a posterior distribution over language models .\nTo do this , we systematically vary the inputs to the baseline feedback method and fit a Dirichlet distribution to the output .\nWe use the posterior mean or mode as the improved feedback model estimate .\nThis process is shown in Figure 1 .\nAs we show later , the mean and mode may vary significantly from the single feedback model proposed by the baseline method .\nWe also perform model combination using several improved feedback language models obtained by a small number of new queries sampled from the original query .\nA model 's weight combines two complementary factors : the model 's probability of generating the query , and the variance of the model , with high-variance models getting lower weight .\n` For example , an expected parameter vector conditioned on the query observation is formed from top-retrieved documents , which are treated as training strings ( see [ 10 ] , p. 62 ) .\nFigure 1 : Estimating the uncertainty of the feedback model for a single query .\n2 .\nSAMPLING-BASED FEEDBACK\n2.1 Modeling Feedback Uncertainty\n2.2 Resampling document models\n2.3 Justification for a sampling approach\n2.4 Visualizing feedback distributions\n2.5 Fitting a posterior feedback distribution\n2.6 Query variants\n2.7 Combining enhanced feedback models from multiple query variants\n3 .\nEVALUATION\n3.1 General method\n3.2 Baseline feedback method\n3.3 Expansion performance\n3.4 Retrieval robustness\n3.5 Effect of query and document sampling methods\n3.6 The effect of resampling on expansion term quality\n4 .\nRELATED WORK\nOur approach is related to previous work from several areas of information retrieval and machine learning .\nOur use of query variation was inspired by the work of YomTov et al. [ 20 ] , Carpineto et al. [ 5 ] , and Amati et al. [ 2 ] , among others .\nThese studies use the idea of creating multiple subqueries and then examining the nature of the overlap in the documents and/or expansion terms that result from each subquery .\nModel combination is performed using heuristics .\nIn particular , the studies of Amati et al. and Carpineto et al. investigated combining terms from individual distributional methods using a term-reranking combination heuristic .\nIn a set of TREC topics they found wide average variation in the rank-distance of terms from different expansion methods .\nTheir combination method gave modest positive improvements in average precision .\nThe idea of examining the overlap between lists of suggested terms has also been used in early query expansion approaches .\nXu and Croft 's method of Local Context Analysis ( LCA ) [ 19 ] includes a factor in the empirically-derived weighting formula that causes expansion terms to be preferred that have connections to multiple query terms .\nOn the document side , recent work by Zhou & Croft [ 21 ] explored the idea of adding noise to documents , re-scoring them , and using the stability of the resulting rankings as an estimate of query difficulty .\nThis is related to our use of document sampling to estimate the risk of the feedback model built from the different sets of top-retrieved documents .\nSakai et al. 
[ 17 ] proposed an approach to improving the robustness of pseudo-relevance feedback using a method they call selective sampling .\nThe essence of their method is that they allow skipping of some top-ranked documents , based on a clustering criterion , in order to select a more varied and novel set of documents later in the ranking for use by a traditional pseudo-feedback method .\nTheir study did not find significant improvements in either robustness ( RI ) or MAP on their corpora .\nGreiff , Morgan and Ponte [ 8 ] explored the role of variance in term weighting .\nIn a series of simulations that simplified the problem to 2-feature documents , they found that average precision degrades as term frequency variance -- high noise -- increases .\nDownweighting terms with high variance resulted in improved average precision .\nThis seems in accord with our own findings for individual feedback models .\nEstimates of output variance have recently been used for improved text classification .\nLee et al. [ 11 ] used queryspecific variance estimates of classifier outputs to perform improved model combination .\nInstead of using sampling , they were able to derive closed-form expressions for classifier variance by assuming base classifiers using simple types of inference networks .\nAndo and Zhang proposed a method that they call structural feedback [ 3 ] and showed how to apply it to query expansion for the TREC Genomics Track .\nThey used r query\nvariations to obtain R different sets Sr of top-ranked documents that have been intersected with the top-ranked documents obtained from the original query qorig .\nFor each Si , the normalized centroid vector \u02c6wi of the documents is calculated .\nPrincipal component analysis ( PCA ) is then applied to the \u02c6wi to obtain the matrix 4 ) of H left singular vectors \u03c6h that are used to obtain the new , expanded query\nso that the dot product \u03c6T qorig is a type of dynamic weight on the expanded query that is based on the similarity of the original query to the expanded query .\nThe use of variance as a feedback model quality measure occurs indirectly through the application of PCA .\nIt would be interesting to study the connections between this approach and our own modelfitting method .\nFinally , in language modeling approaches to feedback , Tao and Zhai [ 18 ] describe a method for more robust feedback that allows each document to have a different feedback \u03b1 .\nThe feedback weights are derived automatically using regularized EM .\nA roughly equal balance of query and expansion model is implied by their EM stopping condition .\nThey propose tailoring the stopping parameter \u03b7 based on a function of some quality measure of feedback documents .\n5 .\nCONCLUSIONS\nWe have presented a new approach to pseudo-relevance feedback based on document and query sampling .\nThe use of sampling is a very flexible and powerful device and is motivated by our general desire to extend current models of retrieval by estimating the risk or variance associated with the parameters or output of retrieval processes .\nSuch variance estimates , for example , may be naturally used in a Bayesian framework for improved model estimation and combination .\nApplications such as selective expansion may then be implemented in a principled way .\nWhile our study uses the language modeling approach as a framework for experiments , we make few assumptions about the actual workings of the feedback algorithm .\nWe believe it is likely that any reasonably effective 
baseline feedback algorithm would benefit from our approach .\nOur results on standard TREC collections show that our framework improves the robustness of a strong baseline feedback method across a variety of collections , without sacrificing average precision .\nIt also gives small but consistent gains in top10 precision .\nIn future work , we envision an investigation into how varying the set of sampling methods used and the number of samples controls the trade-off between robustness , accuracy , and efficiency .", "lvl-4": "Estimation and Use of Uncertainty in Pseudo-relevance Feedback\nABSTRACT\nExisting pseudo-relevance feedback methods typically perform averaging over the top-retrieved documents , but ignore an important statistical dimension : the risk or variance associated with either the individual document models , or their combination .\nTreating the baseline feedback method as a black box , and the output feedback model as a random variable , we estimate a posterior distribution for the feedback model by resampling a given query 's top-retrieved documents , using the posterior mean or mode as the enhanced feedback model .\nWe then perform model combination over several enhanced models , each based on a slightly modified query sampled from the original query .\nWe find that resampling documents helps increase individual feedback model precision by removing noise terms , while sampling from the query improves robustness ( worst-case performance ) by emphasizing terms related to multiple query aspects .\nThe result is a meta-feedback algorithm that is both more robust and more precise than the original strong baseline method .\n1 .\nINTRODUCTION\nUncertainty is an inherent feature of information retrieval .\nEven if the query were perfectly specified , language in the collection documents is inherently complex and ambiguous and matching such language effectively is a formidable problem by itself .\nIn this way , retrieval algorithms may attempt to quantify the risk or uncertainty associated with their output rankings , or improve the stability or precision of their internal calculations .\nCurrent algorithms for pseudo-relevance feedback ( PRF ) tend to follow the same basic method whether we use vector space-based algorithms such as Rocchio 's formula [ 16 ] , or more recent language modeling approaches such as Relevance Models [ 10 ] .\nFirst , a set of top-retrieved documents is obtained from an initial query and assumed to approximate a set of relevant documents .\nNext , a single feedback model vector is computed according to some sort of average , centroid , or expectation over the set of possibly-relevant document models .\nFor example , the document vectors may be combined with equal weighting , as in Rocchio , or by query likelihood , as may be done using the Relevance Model ' .\nThe use of an expectation is reasonable for practical and theoretical reasons , but by itself ignores potentially valuable information about the risk of the feedback model .\nOur main hypothesis in this paper is that estimating the uncertainty in feedback is useful and leads to better individual feedback models and more robust combined models .\nTherefore , we propose a method for estimating uncertainty associated with an individual feedback model in terms of a posterior distribution over language models .\nTo do this , we systematically vary the inputs to the baseline feedback method and fit a Dirichlet distribution to the output .\nWe use the posterior mean or mode as the improved feedback 
model estimate .\nThis process is shown in Figure 1 .\nAs we show later , the mean and mode may vary significantly from the single feedback model proposed by the baseline method .\nWe also perform model combination using several improved feedback language models obtained by a small number of new queries sampled from the original query .\nA model 's weight combines two complementary factors : the model 's probability of generating the query , and the variance of the model , with high-variance models getting lower weight .\n` For example , an expected parameter vector conditioned on the query observation is formed from top-retrieved documents , which are treated as training strings ( see [ 10 ] , p. 62 ) .\nFigure 1 : Estimating the uncertainty of the feedback model for a single query .\n4 .\nRELATED WORK\nOur approach is related to previous work from several areas of information retrieval and machine learning .\nThese studies use the idea of creating multiple subqueries and then examining the nature of the overlap in the documents and/or expansion terms that result from each subquery .\nModel combination is performed using heuristics .\nIn particular , the studies of Amati et al. and Carpineto et al. investigated combining terms from individual distributional methods using a term-reranking combination heuristic .\nIn a set of TREC topics they found wide average variation in the rank-distance of terms from different expansion methods .\nTheir combination method gave modest positive improvements in average precision .\nThe idea of examining the overlap between lists of suggested terms has also been used in early query expansion approaches .\nOn the document side , recent work by Zhou & Croft [ 21 ] explored the idea of adding noise to documents , re-scoring them , and using the stability of the resulting rankings as an estimate of query difficulty .\nThis is related to our use of document sampling to estimate the risk of the feedback model built from the different sets of top-retrieved documents .\nSakai et al. [ 17 ] proposed an approach to improving the robustness of pseudo-relevance feedback using a method they call selective sampling .\nGreiff , Morgan and Ponte [ 8 ] explored the role of variance in term weighting .\nIn a series of simulations that simplified the problem to 2-feature documents , they found that average precision degrades as term frequency variance -- high noise -- increases .\nDownweighting terms with high variance resulted in improved average precision .\nThis seems in accord with our own findings for individual feedback models .\nEstimates of output variance have recently been used for improved text classification .\nLee et al. 
[ 11 ] used queryspecific variance estimates of classifier outputs to perform improved model combination .\nInstead of using sampling , they were able to derive closed-form expressions for classifier variance by assuming base classifiers using simple types of inference networks .\nAndo and Zhang proposed a method that they call structural feedback [ 3 ] and showed how to apply it to query expansion for the TREC Genomics Track .\nThey used r query\nvariations to obtain R different sets Sr of top-ranked documents that have been intersected with the top-ranked documents obtained from the original query qorig .\nFor each Si , the normalized centroid vector \u02c6wi of the documents is calculated .\nPrincipal component analysis ( PCA ) is then applied to the \u02c6wi to obtain the matrix 4 ) of H left singular vectors \u03c6h that are used to obtain the new , expanded query\nThe use of variance as a feedback model quality measure occurs indirectly through the application of PCA .\nIt would be interesting to study the connections between this approach and our own modelfitting method .\nFinally , in language modeling approaches to feedback , Tao and Zhai [ 18 ] describe a method for more robust feedback that allows each document to have a different feedback \u03b1 .\nThe feedback weights are derived automatically using regularized EM .\nA roughly equal balance of query and expansion model is implied by their EM stopping condition .\nThey propose tailoring the stopping parameter \u03b7 based on a function of some quality measure of feedback documents .\n5 .\nCONCLUSIONS\nWe have presented a new approach to pseudo-relevance feedback based on document and query sampling .\nSuch variance estimates , for example , may be naturally used in a Bayesian framework for improved model estimation and combination .\nWhile our study uses the language modeling approach as a framework for experiments , we make few assumptions about the actual workings of the feedback algorithm .\nWe believe it is likely that any reasonably effective baseline feedback algorithm would benefit from our approach .\nOur results on standard TREC collections show that our framework improves the robustness of a strong baseline feedback method across a variety of collections , without sacrificing average precision .\nIt also gives small but consistent gains in top10 precision .\nIn future work , we envision an investigation into how varying the set of sampling methods used and the number of samples controls the trade-off between robustness , accuracy , and efficiency .", "lvl-2": "Estimation and Use of Uncertainty in Pseudo-relevance Feedback\nABSTRACT\nExisting pseudo-relevance feedback methods typically perform averaging over the top-retrieved documents , but ignore an important statistical dimension : the risk or variance associated with either the individual document models , or their combination .\nTreating the baseline feedback method as a black box , and the output feedback model as a random variable , we estimate a posterior distribution for the feedback model by resampling a given query 's top-retrieved documents , using the posterior mean or mode as the enhanced feedback model .\nWe then perform model combination over several enhanced models , each based on a slightly modified query sampled from the original query .\nWe find that resampling documents helps increase individual feedback model precision by removing noise terms , while sampling from the query improves robustness ( worst-case performance ) by emphasizing terms 
related to multiple query aspects .\nThe result is a meta-feedback algorithm that is both more robust and more precise than the original strong baseline method .\n1 .\nINTRODUCTION\nUncertainty is an inherent feature of information retrieval .\nNot only do we not know the queries that will be presented to our retrieval algorithm ahead of time , but the user 's information need may be vague or incompletely specified by these queries .\nEven if the query were perfectly specified , language in the collection documents is inherently complex and ambiguous and matching such language effectively is a formidable problem by itself .\nWith this in mind , we wish to treat many important quantities calculated by the re\ntrieval system , whether a relevance score for a document , or a weight for a query expansion term , as random variables whose true value is uncertain but where the uncertainty about the true value may be quantified by replacing the fixed value with a probability distribution over possible values .\nIn this way , retrieval algorithms may attempt to quantify the risk or uncertainty associated with their output rankings , or improve the stability or precision of their internal calculations .\nCurrent algorithms for pseudo-relevance feedback ( PRF ) tend to follow the same basic method whether we use vector space-based algorithms such as Rocchio 's formula [ 16 ] , or more recent language modeling approaches such as Relevance Models [ 10 ] .\nFirst , a set of top-retrieved documents is obtained from an initial query and assumed to approximate a set of relevant documents .\nNext , a single feedback model vector is computed according to some sort of average , centroid , or expectation over the set of possibly-relevant document models .\nFor example , the document vectors may be combined with equal weighting , as in Rocchio , or by query likelihood , as may be done using the Relevance Model ' .\nThe use of an expectation is reasonable for practical and theoretical reasons , but by itself ignores potentially valuable information about the risk of the feedback model .\nOur main hypothesis in this paper is that estimating the uncertainty in feedback is useful and leads to better individual feedback models and more robust combined models .\nTherefore , we propose a method for estimating uncertainty associated with an individual feedback model in terms of a posterior distribution over language models .\nTo do this , we systematically vary the inputs to the baseline feedback method and fit a Dirichlet distribution to the output .\nWe use the posterior mean or mode as the improved feedback model estimate .\nThis process is shown in Figure 1 .\nAs we show later , the mean and mode may vary significantly from the single feedback model proposed by the baseline method .\nWe also perform model combination using several improved feedback language models obtained by a small number of new queries sampled from the original query .\nA model 's weight combines two complementary factors : the model 's probability of generating the query , and the variance of the model , with high-variance models getting lower weight .\n` For example , an expected parameter vector conditioned on the query observation is formed from top-retrieved documents , which are treated as training strings ( see [ 10 ] , p. 
62 ) .\nFigure 1 : Estimating the uncertainty of the feedback model for a single query .\n2 .\nSAMPLING-BASED FEEDBACK\nIn Sections 2.1 -- 2.5 we describe a general method for estimating a probability distribution over the set of possible language models .\nIn Sections 2.6 and 2.7 we summarize how different query samples are used to generate multiple feedback models , which are then combined .\n2.1 Modeling Feedback Uncertainty\nGiven a query Q and a collection C , we assume a probabilistic retrieval system that assigns a real-valued document score f ( D , Q ) to each document D in C , such that the score is proportional to the estimated probability of relevance .\nWe make no other assumptions about f ( D , Q ) .\nThe nature of f ( D , Q ) may be complex : for example , if the retrieval system supports structured query languages [ 12 ] , then f ( D , Q ) may represent the output of an arbitrarily complex inference network defined by the structured query operators .\nIn theory , the scoring function can vary from query to query , although in this study for simplicity we keep the scoring function the same for all queries .\nOur specific query method is given in Section 3 .\nWe treat the feedback algorithm as a black box and assume that the inputs to the feedback algorithm are the original query and the corresponding top-retrieved documents , with a score being given to each document .\nWe assume that the output of the feedback algorithm is a vector of term weights to be used to add or reweight the terms in the representation of the original query , with the vector normalized to form a probability distribution .\nWe view the the inputs to the feedback black box as random variables , and analyze the feedback model as a random variable that changes in response to changes in the inputs .\nLike the document scoring function f ( D , Q ) , the feedback algorithm may implement a complex , non-linear scoring formula , and so as its inputs vary , the resulting feedback models may have a complex distribution over the space of feedback models ( the sample space ) .\nBecause of this potential complexity , we do not attempt to derive a posterior distribution in closed form , but instead use simulation .\nWe call this distribution over possible feedback models the feedback model distribution .\nOur goal in this section is to estimate a useful approximation to the feedback model distribution .\nFor a specific framework for experiments , we use the language modeling ( LM ) approach for information retrieval [ 15 ] .\nThe score of a document D with respect to a query Q and collection C is given by p ( Q | D ) with respect to language models \u02c6\u03b8Q and \u02c6\u03b8D estimated for the query and document respectively .\nWe denote the set of k top-retrieved documents from collection C in response to Q by DQ ( k , C ) .\nFor simplicity , we assume that queries and documents are generated by multinomial distributions whose parameters are represented by unigram language models .\nTo incorporate feedback in the LM approach , we assume a model-based scheme in which our goal is take the query and resulting ranked documents DQ ( k , C ) as input , and output an expansion language model \u02c6\u03b8E , which is then interpolated with the original query model \u02c6\u03b8Q :\nThis includes the possibility of \u03b1 = 1 where the original query mode is completely replaced by the feedback model .\nOur sample space is the set of all possible language models LF that may be output as feedback models .\nOur approach is 
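The interpolation formula referred to in Section 2.1 did not survive extraction; the conventional model-based feedback mixture it almost certainly denotes is θ'_Q = (1 − α)·θ̂_Q + α·θ̂_E, with α = 1 replacing the query model entirely. A minimal sketch under that assumption:

def interpolate_feedback_model(theta_q, theta_e, alpha=0.5):
    """Linear mixture of the original query model and the expansion
    model: theta'_Q = (1 - alpha) * theta_Q + alpha * theta_E.

    theta_q, theta_e : dicts mapping term -> probability
    alpha = 1.0 discards the original query model entirely.
    """
    vocab = set(theta_q) | set(theta_e)
    mixed = {t: (1.0 - alpha) * theta_q.get(t, 0.0)
                + alpha * theta_e.get(t, 0.0) for t in vocab}
    z = sum(mixed.values()) or 1.0
    # Renormalize so the result is again a probability distribution.
    return {t: p / z for t, p in mixed.items()}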
to take samples from this space and then fit a distribution to the samples using maximum likelihood .\nFor simplicity , we start by assuming the latent feedback distribution has the form of a Dirichlet distribution .\nAlthough the Dirichlet is a unimodal distribution , and in general quite limited in its expressiveness in the sample space , it is a natural match for the multinomial language model , can be estimated quickly , and can capture the most salient features of confident and uncertain feedback models , such as the overall spread of the distibution .\n2.2 Resampling document models\nWe would like an approximation to the posterior distribution of the feedback model LF .\nTo accomplish this , we apply a widely-used simulation technique called bootstrap sampling ( [ 7 ] , p. 474 ) on the input parameters , namely , the set of top-retrieved documents .\nBootstrap sampling allows us to simulate the approximate effect of perturbing the parameters within the black box feedback algorithm by perturbing the inputs to that algorithm in a systematic way , while making no assumptions about the nature of the feedback algorithm .\nSpecifically , we sample k documents with replacement from DQ ( k , C ) , and calculate an expansion language model \u03b8b using the black box feedback method .\nWe repeat this process B times to obtain a set of B feedback language models , to which we then fit a Dirichlet distribution .\nTypically B is in the range of 20 to 50 samples , with performance being relatively stable in this range .\nNote that instead of treating each top document as equally likely , we sample according to the estimated probabilities of relevance of each document in DQ ( k , C ) .\nThus , a document is more likely to be chosen the higher it is in the ranking .\n2.3 Justification for a sampling approach\nThe rationale for our sampling approach has two parts .\nFirst , we want to improve the quality of individual feedback models by smoothing out variation when the baseline feedback model is unstable .\nIn this respect , our approach resembles bagging [ 4 ] , an ensemble approach which generates multiple versions of a predictor by making bootstrap copies of the training set , and then averages the ( numerical ) predictors .\nIn our application , top-retrieved documents can be seen as a kind of noisy training set for relevance .\nSecond , sampling is an effective way to estimate basic properties of the feedback posterior distribution , which can then be used for improved model combination .\nFor example , a model may be weighted by its prediction confidence , estimated as a function of the variability of the posterior around the model .\nFigure 2 : Visualization of expansion language model vari\nance using self-organizing maps , showing the distribution of language models that results from resampling the inputs to the baseline expansion method .\nThe language model that would have been chosen by the baseline expansion is at the center of each map .\nThe similarity function is JensenShannon divergence .\n2.4 Visualizing feedback distributions\nBefore describing how we fit and use the Dirichlet distribution over feedback models , it is instructive to view some examples of actual feedback model distributions that result from bootstrap sampling the top-retrieved documents from different TREC topics .\nEach point in our sample space is a language model , which typically has several thousand dimensions .\nTo help analyze the behavior of our method we used a Self-Organizing Map ( via the SOM-PAK package 
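A minimal sketch of the bootstrap loop of Section 2.2: resample the top-k documents with replacement, weighting selection by the estimated relevance scores, and run the black-box feedback method on each resample. run_baseline_feedback is a placeholder for whatever baseline is being wrapped, not an API from the paper.

import random

def bootstrap_feedback_models(query, top_docs, rel_scores,
                              run_baseline_feedback, B=30):
    """Return B feedback language models, one per bootstrap resample
    of the top-retrieved documents D_Q(k, C).

    top_docs   : the k top-retrieved documents
    rel_scores : matching non-negative relevance scores (higher-ranked
                 documents are therefore more likely to be drawn)
    run_baseline_feedback(query, docs) -> dict term -> weight
    """
    k = len(top_docs)
    models = []
    for _ in range(B):
        resample = random.choices(top_docs, weights=rel_scores, k=k)
        models.append(run_baseline_feedback(query, resample))
    return models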
[9]), to 'flatten' and visualize the high-dimensional density function.\n(Because our points are language models in the multinomial simplex, we extended SOM-PAK to support Jensen-Shannon divergence, a widely-used similarity measure between probability distributions.)\nThe density maps for three TREC topics are shown in Figure 2 above.\nThe dark areas represent regions of high similarity between language models.\nThe light areas represent regions of low similarity -- the 'valleys' between clusters.\nEach diagram is centered on the language model that would have been chosen by the baseline expansion.\nA single peak (mode) is evident in some examples, but more complex structure appears in others.\nAlso, while the distribution is usually close to the baseline feedback model, for some topics the two are a significant distance apart (as measured by Jensen-Shannon divergence), as in Subfigure 2c.\nIn such cases, the mode or mean of the feedback distribution often performs significantly better than the baseline (and in a smaller proportion of cases, significantly worse).\n2.5 Fitting a posterior feedback distribution\nAfter obtaining feedback model samples by resampling the feedback model inputs, we estimate the feedback distribution.\nWe assume that the multinomial feedback models {θ̂1, ..., θ̂B} were generated by a latent Dirichlet distribution with parameters {α1, ..., αN}.\nTo estimate the {α1, ..., αN}, we fit the Dirichlet parameters to the B language model samples according to maximum likelihood using a generalized Newton procedure, details of which are given in Minka [13].\nWe assume a simple Dirichlet prior over the {α1, ..., αN}, setting each to αi = µ · p(wi | C), where µ is a parameter and p(· | C) is the collection language model estimated from a set of documents from collection C.\nThe parameter fitting converges very quickly -- typically just 2 or 3 iterations are enough -- so that it is practical to apply at query-time when computational overhead must be small.\nIn practice, we can restrict the calculation to the vocabulary of the top-retrieved documents, instead of the entire collection.\nNote that for this step we are re-using the existing retrieved documents and not performing additional queries.\nGiven the parameters of an N-dimensional Dirichlet distribution Dir(α), the mean µ and mode x vectors are easy to calculate and are given respectively by µi = αi / Σj αj and xi = (αi − 1) / (Σj αj − N).\nWe can then choose the language model at the mean or the mode of the posterior as the final enhanced feedback model.\n(We found the mode to give slightly better performance.
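To make the fitting step concrete: the sketch below estimates Dir(α) from the B sampled models with a simple moment-matching rule (only a stand-in for the generalized-Newton maximum-likelihood fit of Minka [13] used in the paper) and then reads off the posterior mean and mode given above.

import numpy as np

def fit_dirichlet_by_moments(samples):
    """Crude moment-matching estimate of Dirichlet parameters.

    samples : (B, N) array; each row is one sampled feedback model
              (a probability distribution over the N feedback terms).
    """
    m = samples.mean(axis=0)                 # sample means E[p_i]
    v = samples.var(axis=0)                  # sample variances Var[p_i]
    i = int(np.argmax(v))                    # use the most variable term
    # For Dir(alpha): Var[p_i] = m_i (1 - m_i) / (alpha_0 + 1).
    alpha0 = m[i] * (1.0 - m[i]) / max(v[i], 1e-12) - 1.0
    return np.maximum(alpha0 * m, 1e-6)

def dirichlet_mean_and_mode(alpha):
    """mean_i = alpha_i / sum(alpha);
    mode_i = (alpha_i - 1) / (sum(alpha) - N), defined when all alpha_i > 1."""
    a0, n = alpha.sum(), len(alpha)
    mean = alpha / a0
    mode = (alpha - 1.0) / (a0 - n) if np.all(alpha > 1.0) else None
    return mean, mode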
)\nFor information retrieval , the number of samples we will have available is likely to be quite small for performance reasons -- usually less than ten .\nMoreover , while random sampling is useful in certain cases , it is perfectly acceptable to allow deterministic sampling distributions , but these must be designed carefully in order to approximate an accurate output variance .\nWe leave this for future study .\n2.6 Query variants\nWe use the following methods for generating variants of the original query .\nEach variant corresponds to a different assumption about which aspects of the original query may be important .\nThis is a form of deterministic sampling .\nWe selected three simple methods that cover complimentary assumptions about the query .\nNo-expansion Use only the original query .\nThe assumption is that the given terms are a complete description of the information need .\nLeave-one-out A single term is left out of the original query .\nThe assumption is that one of the query terms is a noise term .\nSingle-term A single term is chosen from the original query .\nThis assumes that only one aspect of the query , namely , that represented by the term , is most important .\nAfter generating a variant of the original query , we combine it with the original query using a weight \u03b1SUB so that we do not stray too ` far ' .\nIn this study , we set \u03b1SUB = 0.5 .\nFor example , using the Indri [ 12 ] query language , a leave-oneout variant of the initial query that omits the term ` ireland ' for TREC topic 404 is : #weight ( 0.5 #combine ( ireland peace talks ) 0.5 #combine ( peace talks ) )\n2.7 Combining enhanced feedback models from multiple query variants\nWhen using multiple query variants , the resulting enhanced feedback models are combined using Bayesian model combination .\nTo do this , we treat each word as an item to be classified as belonging to a relevant or non-relevant class , and derive a class probability for each word by combining the scores from each query variant .\nEach score is given by that term 's probability in the Dirichlet distribution .\nThe term scores are weighted by the inverse of the variance of the term in the enhanced feedback model 's Dirichlet distribution .\nThe prior probability of a word 's membership in the relevant class is given by the probability of the original query in the entire enhanced expansion model .\n3 .\nEVALUATION\nIn this section we present results confirming the usefulness of estimating a feedback model distribution from weighted resampling of top-ranked documents , and of combining the feedback models obtained from different small changes in the original query .\n3.1 General method\nWe evaluated performance on a total of 350 queries derived from four sets of TREC topics : 51-200 ( TREC-1 & 2 ) , 351-400 ( TREC-7 ) , 401-450 ( TREC-8 ) , and 451-550 ( wt10g , TREC-9 & 10 ) .\nWe chose these for their varied content and document properties .\nFor example , wt10g documents are Web pages with a wide variety of subjects and styles while TREC-1 & 2 documents are more homogeneous news articles .\nIndexing and retrieval was performed using the Indri system in the Lemur toolkit [ 12 ] [ 1 ] .\nOur queries were derived from the words in the title field of the TREC topics .\nPhrases were not used .\nTo generate the baseline queries passed to Indri , we wrapped the query terms with Indri 's #combine operator .\nFor example , the initial query for topic 404 is : #combine ( ireland peace talks ) We performed Krovetz stemming for all 
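Two sketches for the machinery of Sections 2.6 and 2.7: building leave-one-out variants as Indri #weight/#combine strings (the syntax follows the topic 404 example above), and a deliberately simplified inverse-variance combination of the per-variant models; the paper's Bayesian combination also folds in a query-likelihood prior, which is omitted here.

def leave_one_out_variants(query_terms, alpha_sub=0.5):
    """Indri query strings mixing each leave-one-out variant with the
    original query, e.g. for ['ireland', 'peace', 'talks']:
    #weight( 0.5 #combine( ireland peace talks ) 0.5 #combine( peace talks ) )
    """
    original = " ".join(query_terms)
    variants = []
    for i in range(len(query_terms)):
        reduced = " ".join(t for j, t in enumerate(query_terms) if j != i)
        variants.append(f"#weight( {1 - alpha_sub} #combine( {original} ) "
                        f"{alpha_sub} #combine( {reduced} ) )")
    return variants

def combine_inverse_variance(term_means, term_variances):
    """Combine enhanced feedback models from several query variants,
    down-weighting terms whose weights are unstable across samples.

    term_means, term_variances : lists of dicts term -> posterior
    mean / variance, one pair per query variant.
    """
    combined, eps = {}, 1e-9
    for means, variances in zip(term_means, term_variances):
        for t, m in means.items():
            combined[t] = combined.get(t, 0.0) + m / (variances.get(t, 0.0) + eps)
    z = sum(combined.values()) or 1.0
    return {t: w / z for t, w in combined.items()}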
experiments.\nBecause we found that the baseline (Indri) expansion method performed better using a stopword list with the feedback model, all experiments used a stoplist of 419 common English words.\nHowever, an interesting side-effect of our resampling approach is that it tends to remove many stopwords from the feedback model, making a stoplist less critical.\nThis is discussed further in Section 3.6.\n3.2 Baseline feedback method\nFor our baseline expansion method, we use an algorithm included in Indri 1.0 as the default expansion method.\nThis method first selects terms using a log-odds calculation described by Ponte [14], but assigns final term weights using Lavrenko's relevance model [10].\nWe chose the Indri method because it gives a consistently strong baseline, is based on a language modeling approach, and is simple to experiment with.\nIn a TREC evaluation using the GOV2 corpus [6], the method was one of the top-performing runs, achieving a 19.8% gain in MAP compared to using unexpanded queries.\nIn this study, it achieves an average gain in MAP of 17.25% over the four collections.\nIndri's expansion method first calculates a log-odds ratio o(v) for each potential expansion term v, given by a sum over all documents D containing v in the collection C.\nThen, the expansion term candidates are sorted by descending o(v), and the top m are chosen.\nFinally, the term weights r(v) used in the expanded query are calculated based on the relevance model.\nThe quantity p(q | D) is the probability score assigned to the document in the initial retrieval.\nWe use Dirichlet smoothing of p(v | D) with μ = 1000.\nThis relevance model is then combined with the original query using linear interpolation, weighted by a parameter α.\nBy default we used the top 50 documents for feedback and the top 20 expansion terms, with the feedback interpolation parameter α = 0.5 unless otherwise stated.\nFor example, the baseline expanded query for topic 404 is: #weight( 0.5 #combine( ireland peace talks ) 0.5 #weight( 0.10 ireland 0.08 peace 0.08 northern ...
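The exact expressions for the log-odds score o(v) and the relevance-model weight r(v) are not given above (they were lost in extraction). The sketch below shows one plausible reading consistent with the prose (a Ponte-style log-odds of p(v | D) against p(v | C) for selection, and p(v | D) weighted by the retrieval score p(q | D) for the final weights) and should be treated as an assumption, not the paper's exact formulas.

import math

def baseline_expansion_terms(doc_models, p_q_given_d, p_v_given_c, m=20):
    """Select m expansion terms and weight them.

    doc_models   : dict doc_id -> {term: Dirichlet-smoothed p(v | D)}
    p_q_given_d  : dict doc_id -> retrieval score p(q | D)
    p_v_given_c  : dict term -> collection probability p(v | C)
    """
    log_odds, rel_weight = {}, {}
    for d, model in doc_models.items():
        for v, p_vd in model.items():
            # Assumed o(v): log-odds of v in the top documents vs. the collection.
            log_odds[v] = log_odds.get(v, 0.0) + math.log(p_vd / p_v_given_c[v])
            # Assumed r(v): relevance-model weight p(v | D) * p(q | D).
            rel_weight[v] = rel_weight.get(v, 0.0) + p_vd * p_q_given_d[d]
    top_m = sorted(log_odds, key=log_odds.get, reverse=True)[:m]
    z = sum(rel_weight[v] for v in top_m) or 1.0
    return {v: rel_weight[v] / z for v in top_m}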
)\n3.3 Expansion performance\nWe measure our feedback algorithm 's effectiveness by two main criteria : precision , and robustness .\nRobustness , and the tradeoff between precision and robustness , is analyzed in Section 3.4 .\nIn this section , we examine average precision and precision in the top 10 documents ( P10 ) .\nWe also include recall at 1,000 documents .\nFor each query , we obtained a set of B feedback models using the Indri baseline .\nEach feedback model was obtained from a random sample of the top k documents taken with replacement .\nFor these experiments , B = 30 and k = 50 .\nEach feedback model contained 20 terms .\nOn the query side , we used leave-one-out ( LOO ) sampling to create the query variants .\nSingle-term query sampling had consistently worse performance across all collections and so our results here focus on LOO sampling .\nWe used the methods described in Section 2 to estimate an enhanced feedback model from the Dirichlet posterior distribution for each query variant , and to combine the feedback models from all the query variants .\nWe call our method ` resampling expansion ' and denote it as RS-FB here .\nWe denote the Indri baseline feedback method as Base-FB .\nResults from applying both the baseline expansion method ( Base-FB ) and resampling expansion ( RS-FB ) are shown in Table 1 .\nWe observe several trends in this table .\nFirst , the average precision of RS-FB was comparable to Base-FB , achieving an average gain of 17.6 % compared to using no expansion across the four collections .\nThe Indri baseline expansion gain was 17.25 % .\nAlso , the RS-FB method achieved consistent improvements in P10 over Base-FB for every topic set , with an average improvement of 6.89 % over Base-FB for all 350 topics .\nThe lowest P10 gain over Base-FB was +3.82 % for TREC-7 and the highest was +11.95 % for wt10g .\nFinally , both Base-FB and RS-FB also consistently improved recall over using no expansion , with Base-FB achieving better recall than RS-FB for all topic sets .\n3.4 Retrieval robustness\nWe use the term robustness to mean the worst-case average precision performance of a feedback algorithm .\nIdeally , a robust feedback method would never perform worse than using the original query , while often performing better using the expansion .\nTo evaluate robustness in this study , we use a very simple measure called the robustness index ( RI ) 3 .\nFor a set of queries Q , the RI measure is defined as :\nwhere n + is the number of queries helped by the feedback method and n _ is the number of queries hurt .\nHere , by ` helped ' we mean obtaining a higher average precision as a result of feedback .\nThe value of RI ranges from a minimum 3This is sometimes also called the reliability of improvement index and was used in Sakai et al. 
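The RI formula itself is missing above, but the surrounding prose pins it down as RI(Q) = (n+ − n−) / |Q|; a one-function sketch:

def robustness_index(ap_baseline, ap_feedback):
    """RI(Q) = (n_plus - n_minus) / |Q|.

    ap_baseline, ap_feedback : dicts query_id -> average precision
    without and with feedback; 'helped' means strictly higher AP.
    """
    n_plus = sum(1 for q in ap_baseline if ap_feedback[q] > ap_baseline[q])
    n_minus = sum(1 for q in ap_baseline if ap_feedback[q] < ap_baseline[q])
    return (n_plus - n_minus) / len(ap_baseline)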
[ 17 ] .\nTable 1 : Comparison of baseline ( Base-FB ) feedback and feedback using re-sampling ( RS-FB ) .\nImprovement shown for BaseFB and RS-FB is relative to using no expansion .\nFigure 3 : The trade-off between robustness and average pre\ncision for different corpora .\nThe x-axis gives the change in MAP over using baseline expansion with \u03b1 = 0.5 .\nThe yaxis gives the Robustness Index ( RI ) .\nEach curve through uncircled points shows the RI/MAP tradeoff using the simple small-\u03b1 strategy ( see text ) as \u03b1 decreases from 0.5 to zero in the direction of the arrow .\nCircled points represent the tradeoffs obtained by resampling feedback for \u03b1 = 0.5 .\nTable 2 : Comparison of robustness index ( RI ) for baseline\nfeedback ( Base-FB ) vs. resampling feedback ( RS-FB ) .\nAlso shown are the actual number of queries hurt by feedback ( n _ ) for each method and collection .\nQueries for which initial average precision was negligible ( \u2264 0.01 ) were ignored , giving the remaining query count in column N. of \u2212 1.0 , when all queries are hurt by the feedback method , to +1.0 when all queries are helped .\nThe RI measure does not take into account the magnitude or distribution of the amount of change across the set Q. However , it is easy to understand as a general indication of robustness .\nOne obvious way to improve the worst-case performance of feedback is simply to use a smaller fixed \u03b1 interpolation parameter , such as \u03b1 = 0.3 , placing less weight on the ( possibly risky ) feedback model and more on the original query .\nWe call this the ` small-\u03b1 ' strategy .\nSince we are also reducing the potential gains when the feedback model is ` right ' , however , we would expect some trade-off between average precision and robustness .\nWe therefore compared the precision/robustness trade-off between our resampling feedback algorithm , and the simple small-\u03b1 method .\nThe results are summarized in Figure 3 .\nIn the figure , the curve for each topic set interpolates between trade-off points , beginning at x = 0 , where \u03b1 = 0.5 , and continuing in the direction of the arrow as \u03b1 decreases and the original query is given more and more weight .\nAs expected , robustness continuously increases as we move along the curve , but mean average precision generally drops as the gains from feedback are eliminated .\nFor comparison , the performance of resampling feedback at \u03b1 = 0.5 is shown for each collection as the circled point .\nHigher and to the right is better .\nThis figure shows that resampling feedback gives a somewhat better trade-off than the small-\u03b1 approach for 3 of the 4 collections .\nFigure 4 : Histogram showing improved robustness of resampling feedback ( RS-FB ) over baseline feedback ( Base-FB ) for all datasets combined .\nQueries are binned by % change in AP compared to the unexpanded query .\nTable 3 : Comparison of resampling feedback using docu\nment sampling ( DS ) with ( QV ) and without ( No QV ) combining feedback models from multiple query variants .\nTable 2 gives the Robustness Index scores for Base-FB and RS-FB .\nThe RS-FB feedback method obtained higher robustness than Base-FB on three of the four topic sets , with only slightly worse performance on TREC-8 .\nA more detailed view showing the distribution over relative changes in AP is given by the histogram in Figure 4 .\nCompared to Base-FB , the RS-FB method achieves a noticable reduction in the number of queries significantly hurt by expansion ( i.e. 
where AP is hurt by 25 % or more ) , while preserving positive gains in AP .\n3.5 Effect of query and document sampling methods\nGiven our algorithm 's improved robustness seen in Section 3.4 , an important question is what component of our system is responsible .\nIs it the use of document re-sampling , the use of multiple query variants , or some other factor ?\nThe results in Table 3 suggest that the model combination based on query variants may be largely account for the improved robustness .\nWhen query variants are turned off and the original query is used by itself with document sampling , there is little net change in average precision , a small decrease in P10 for 3 out of the 4 topic sets , but a significant drop in robustness for all topic sets .\nIn two cases , the RI measure drops by more than 50 % .\nWe also examined the effect of the document sampling method on retrieval effectiveness , using two different strategies .\nThe ` uniform weighting ' strategy ignored the relevance scores from the initial retrieval and gave each document in the top k the same probability of selection .\nIn contrast , the ` relevance-score weighting ' strategy chose documents with probability proportional to their relevance scores .\nIn this way , documents that were more highly ranked were more likely to be selected .\nResults are shown in Table 4 .\nThe relevance-score weighting strategy performs better overall , with significantly higher RI and P10 scores on 3 of the 4 topic sets .\nThe difference in average precision between the methods , however , is less marked .\nThis suggests that uniform weighting acts to increase variance in retrieval results : when initial average precision is high , there are many relevant documents in the top k and uniform sampling may give a more representative relevance model than focusing on the highly-ranked items .\nOn the other hand , when initial precision is low , there are few relevant documents in the bottom ranks and uniform sampling mixes in more of the non-relevant documents .\nFor space reasons we only summarize our findings on sample size here .\nThe number of samples has some effect on precision when less than 10 , but performance stabilizes at around 15 to 20 samples .\nWe used 30 samples for our experiments .\nMuch beyond this level , the additional benefits of more samples decrease as the initial score distribution is more closely fit and the processing time increases .\n3.6 The effect of resampling on expansion term quality\nIdeally , a retrieval model should not require a stopword list when estimating a model of relevance : a robust statistical model should down-weight stopwords automatically depending on context .\nStopwords can harm feedback if selected as feedback terms , because they are typically poor discriminators and waste valuable term slots .\nIn practice , however , because most term selection methods resemble a tf \u00b7 idf type of weighting , terms with low idf but very high tf can sometimes be selected as expansion term candidates .\nThis happens , for example , even with the Relevance Model approach that is part of our baseline feedback .\nTo ensure as strong a baseline as possible , we use a stoplist for all experiments reported here .\nIf we turn off the stopword list , however , we obtain results such as those shown in Table 5 where four of the top ten baseline feedback terms for TREC topic 60 ( said , but , their , not ) are stopwords using the BaseFB method .\n( The top 100 expansion terms were selected to generate this example 
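The two document-sampling strategies compared in Section 3.5 differ only in the selection distribution; a short numpy sketch, assuming the relevance scores are non-negative (raw log-scores would need shifting or exponentiating first):

import numpy as np

def sample_top_documents(doc_ids, rel_scores, k, strategy="relevance", seed=None):
    """Draw k documents with replacement from the top-ranked list.

    strategy = "uniform"   : every top-k document is equally likely
    strategy = "relevance" : selection probability proportional to score
    """
    rng = np.random.default_rng(seed)
    if strategy == "uniform":
        probs = np.full(len(doc_ids), 1.0 / len(doc_ids))
    else:
        scores = np.asarray(rel_scores, dtype=float)
        probs = scores / scores.sum()
    return list(rng.choice(doc_ids, size=k, replace=True, p=probs))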
. )\nIndri 's method attempts to address the stopword problem by applying an initial step based on Ponte [ 14 ] to select less-common terms that have high log-odds of being in the top-ranked documents compared to the whole collection .\nNevertheless , this does not overcome the stopword problem completely , especially as the number of feedback terms grows .\nUsing resampling feedback , however , appears to mitigate\nTable 4 : Comparison of uniform and relevance-weighted document sampling .\nThe percentage change compared to uniform sampling is shown in parentheses .\nQV indicates that query variants were used in both runs .\nTable 5 : Feedback term quality when a stoplist is not used .\nFeedback terms for TREC topic 60 : merit pay vs seniority .\nthe effect of stopwords automatically .\nIn the example of Table 5 , resampling feedback leaves only one stopword ( their ) in the top ten .\nWe observed similar feedback term behavior across many other topics .\nThe reason for this effect appears to be the interaction of the term selection score with the top-m term cutoff .\nWhile the presence and even proportion of particular stopwords is fairly stable across different document samples , their relative position in the top-m list is not , as sets of documents with varying numbers of better , lower-frequency term candidates are examined for each sample .\nAs a result , while some number of stopwords may appear in each sampled document set , any given stopword tends to fall below the cutoff for multiple samples , leading to its classification as a high-variance , low-weight feature .\n4 .\nRELATED WORK\nOur approach is related to previous work from several areas of information retrieval and machine learning .\nOur use of query variation was inspired by the work of YomTov et al. [ 20 ] , Carpineto et al. [ 5 ] , and Amati et al. [ 2 ] , among others .\nThese studies use the idea of creating multiple subqueries and then examining the nature of the overlap in the documents and/or expansion terms that result from each subquery .\nModel combination is performed using heuristics .\nIn particular , the studies of Amati et al. and Carpineto et al. investigated combining terms from individual distributional methods using a term-reranking combination heuristic .\nIn a set of TREC topics they found wide average variation in the rank-distance of terms from different expansion methods .\nTheir combination method gave modest positive improvements in average precision .\nThe idea of examining the overlap between lists of suggested terms has also been used in early query expansion approaches .\nXu and Croft 's method of Local Context Analysis ( LCA ) [ 19 ] includes a factor in the empirically-derived weighting formula that causes expansion terms to be preferred that have connections to multiple query terms .\nOn the document side , recent work by Zhou & Croft [ 21 ] explored the idea of adding noise to documents , re-scoring them , and using the stability of the resulting rankings as an estimate of query difficulty .\nThis is related to our use of document sampling to estimate the risk of the feedback model built from the different sets of top-retrieved documents .\nSakai et al. 
[ 17 ] proposed an approach to improving the robustness of pseudo-relevance feedback using a method they call selective sampling .\nThe essence of their method is that they allow skipping of some top-ranked documents , based on a clustering criterion , in order to select a more varied and novel set of documents later in the ranking for use by a traditional pseudo-feedback method .\nTheir study did not find significant improvements in either robustness ( RI ) or MAP on their corpora .\nGreiff , Morgan and Ponte [ 8 ] explored the role of variance in term weighting .\nIn a series of simulations that simplified the problem to 2-feature documents , they found that average precision degrades as term frequency variance -- high noise -- increases .\nDownweighting terms with high variance resulted in improved average precision .\nThis seems in accord with our own findings for individual feedback models .\nEstimates of output variance have recently been used for improved text classification .\nLee et al. [ 11 ] used queryspecific variance estimates of classifier outputs to perform improved model combination .\nInstead of using sampling , they were able to derive closed-form expressions for classifier variance by assuming base classifiers using simple types of inference networks .\nAndo and Zhang proposed a method that they call structural feedback [ 3 ] and showed how to apply it to query expansion for the TREC Genomics Track .\nThey used r query\nvariations to obtain R different sets Sr of top-ranked documents that have been intersected with the top-ranked documents obtained from the original query qorig .\nFor each Si , the normalized centroid vector \u02c6wi of the documents is calculated .\nPrincipal component analysis ( PCA ) is then applied to the \u02c6wi to obtain the matrix 4 ) of H left singular vectors \u03c6h that are used to obtain the new , expanded query\nso that the dot product \u03c6T qorig is a type of dynamic weight on the expanded query that is based on the similarity of the original query to the expanded query .\nThe use of variance as a feedback model quality measure occurs indirectly through the application of PCA .\nIt would be interesting to study the connections between this approach and our own modelfitting method .\nFinally , in language modeling approaches to feedback , Tao and Zhai [ 18 ] describe a method for more robust feedback that allows each document to have a different feedback \u03b1 .\nThe feedback weights are derived automatically using regularized EM .\nA roughly equal balance of query and expansion model is implied by their EM stopping condition .\nThey propose tailoring the stopping parameter \u03b7 based on a function of some quality measure of feedback documents .\n5 .\nCONCLUSIONS\nWe have presented a new approach to pseudo-relevance feedback based on document and query sampling .\nThe use of sampling is a very flexible and powerful device and is motivated by our general desire to extend current models of retrieval by estimating the risk or variance associated with the parameters or output of retrieval processes .\nSuch variance estimates , for example , may be naturally used in a Bayesian framework for improved model estimation and combination .\nApplications such as selective expansion may then be implemented in a principled way .\nWhile our study uses the language modeling approach as a framework for experiments , we make few assumptions about the actual workings of the feedback algorithm .\nWe believe it is likely that any reasonably effective 
baseline feedback algorithm would benefit from our approach .\nOur results on standard TREC collections show that our framework improves the robustness of a strong baseline feedback method across a variety of collections , without sacrificing average precision .\nIt also gives small but consistent gains in top10 precision .\nIn future work , we envision an investigation into how varying the set of sampling methods used and the number of samples controls the trade-off between robustness , accuracy , and efficiency ."} {"id": "H-17", "title": "", "abstract": "", "keyphrases": ["web search engin", "larg-scale invert index", "queri load", "prune index", "onlin search market", "result qualiti degrad", "prune-base perform optim", "prune techniqu", "result comput algorithm", "top-match page", "top search result", "optim size", "invert index", "prune", "correct guarante"], "prmu": [], "lvl-1": "Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee Alexandros Ntoulas\u2217 Microsoft Search Labs 1065 La Avenida Mountain View, CA 94043, USA antoulas@microsoft.com Junghoo Cho\u2020 UCLA Computer Science Dept. Boelter Hall Los Angeles, CA 90095, USA cho@cs.ucla.edu ABSTRACT The Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information.\nIn order to cope with the vast amounts of query loads, search engines prune their index to keep documents that are likely to be returned as top results, and use this pruned index to compute the first batches of results.\nWhile this approach can improve performance by reducing the size of the index, if we compute the top results only from the pruned index we may notice a significant degradation in the result quality: if a document should be in the top results but was not included in the pruned index, it will be placed behind the results computed from the pruned index.\nGiven the fierce competition in the online search market, this phenomenon is clearly undesirable.\nIn this paper, we study how we can avoid any degradation of result quality due to the pruning-based performance optimization, while still realizing most of its benefit.\nOur contribution is a number of modifications in the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top search results, even though we are computing the first batch from the pruned index most of the time.\nWe also show how to determine the optimal size of a pruned index and we experimentally evaluate our algorithms on a collection of 130 million Web pages.\nCategories and Subject Descriptors H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Algorithms, Measuring, Performance, Design, Experimentation 1.\nINTRODUCTION The amount of information on the Web is growing at a prodigious rate [24].\nAccording to a recent study [13], it is estimated that the Web currently consists of more than 11 billion pages.\nDue to this immense amount of available information, the users are becoming more and more dependent on the Web search engines for locating relevant information on the Web.\nTypically, the Web search engines, similar to other information retrieval applications, utilize a data structure called inverted index.\nAn inverted index provides for the efficient retrieval of the documents (or Web pages) that contain a particular 
keyword.\nIn most cases, a query that the user issues may have thousands or even millions of matching documents.\nIn order to avoid overwhelming the users with a huge amount of results, the search engines present the results in batches of 10 to 20 relevant documents.\nThe user then looks through the first batch of results and, if she doesn``t find the answer she is looking for, she may potentially request to view the next batch or decide to issue a new query.\nA recent study [16] indicated that approximately 80% of the users examine at most the first 3 batches of the results.\nThat is, 80% of the users typically view at most 30 to 60 results for every query that they issue to a search engine.\nAt the same time, given the size of the Web, the inverted index that the search engines maintain can grow very large.\nSince the users are interested in a small number of results (and thus are viewing a small portion of the index for every query that they issue), using an index that is capable of returning all the results for a query may constitute a significant waste in terms of time, storage space and computational resources, which is bound to get worse as the Web grows larger over time [24].\nOne natural solution to this problem is to create a small index on a subset of the documents that are likely to be returned as the top results (by using, for example, the pruning techniques in [7, 20]) and compute the first batch of answers using the pruned index.\nWhile this approach has been shown to give significant improvement in performance, it also leads to noticeable degradation in the quality of the search results, because the top answers are computed only from the pruned index [7, 20].\nThat is, even if a page should be placed as the top-matching page according to a search engine``s ranking metric, the page may be placed behind the ones contained in the pruned index if the page did not become part of the pruned index for various reasons [7, 20].\nGiven the fierce competition among search engines today this degradation is clearly undesirable and needs to be addressed if possible.\nIn this paper, we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit.\nThat is, we present a number of simple (yet important) changes in the pruning techniques for creating the pruned index.\nOur main contribution is a new answer computation algorithm that guarantees that the top-matching pages (according to the search-engine``s ranking metric) are always placed at the top of search results, even though we are computing the first batch of answers from the pruned index most of the time.\nThese enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today``s search engines.\nFinally, we study and present how search engines can minimize the operational cost of answering queries while providing high quality search results.\nIF IF IF IF IF IF IF Ip Ip Ip Ip Ip Ip 5000 queries/sec 5000 queries/sec : 1000 queries/sec : 1000 queries/sec 2nd tier 1st tier (a) (b) Figure 1: (a) Search engine replicates its full index IF to increase query-answering capacity.\n(b) In the 1st tier, small pindexes IP handle most of the queries.\nWhen IP cannot answer a query, it is redirected to the 2nd tier, where the full index IF is used to compute the answer.\n2.\nCLUSTER ARCHITECTURE AND COST SAVINGS FROM A PRUNED INDEX Typically, a search engine downloads documents from the Web 
and maintains a local inverted index that is used to answer queries quickly.\nInverted indexes.\nAssume that we have collected a set of documents D = {D1, ... , DM } and that we have extracted all the terms T = {t1, ... , tn} from the documents.\nFor every single term ti \u2208 T we maintain a list I(ti) of document IDs that contain ti.\nEvery entry in I(ti) is called a posting and can be extended to include additional information, such as how many times ti appears in a document, the positions of ti in the document, whether ti is bold/italic, etc..\nThe set of all the lists I = {I(t1), ... , I(tn)} is our inverted index.\n2.1 Two-tier index architecture Search engines are accepting an enormous number of queries every day from eager users searching for relevant information.\nFor example, Google is estimated to answer more than 250 million user queries per day.\nIn order to cope with this huge query load, search engines typically replicate their index across a large cluster of machines as the following example illustrates: Example 1 Consider a search engine that maintains a cluster of machines as in Figure 1(a).\nThe size of its full inverted index IF is larger than what can be stored in a single machine, so each copy of IF is stored across four different machines.\nWe also suppose that one copy of IF can handle the query load of 1000 queries/sec.\nAssuming that the search engine gets 5000 queries/sec, it needs to replicate IF five times to handle the load.\nOverall, the search engine needs to maintain 4 \u00d7 5 = 20 machines in its cluster.\n2 While fully replicating the entire index IF multiple times is a straightforward way to scale to a large number of queries, typical query loads at search engines exhibit certain localities, allowing for significant reduction in cost by replicating only a small portion of the full index.\nIn principle, this is typically done by pruning a full index IF to create a smaller, pruned index (or p-index) IP , which contains a subset of the documents that are likely to be returned as top results.\nGiven the p-index, search engines operate by employing a twotier index architecture as we show in Figure 1(b): All incoming queries are first directed to one of the p-indexes kept in the 1st tier.\nIn the cases where a p-index cannot compute the answer (e.g. was unable to find enough documents to return to the user) the query is answered by redirecting it to the 2nd tier, where we maintain a full index IF .\nThe following example illustrates the potential reduction in the query-processing cost by employing this two-tier index architecture.\nExample 2 Assume the same parameter settings as in Example 1.\nThat is, the search engine gets a query load of 5000 queries/sec Algorithm 2.1 Computation of answer with correctness guarantee Input q = ({t1, ... , tn}, [i, i + k]) where {t1, ... 
, tn}: keywords in the query [i, i + k]: range of the answer to return Procedure (1) (A, C) = ComputeAnswer(q, IP ) (2) If (C = 1) Then (3) Return A (4) Else (5) A = ComputeAnswer(q, IF ) (6) Return A Figure 2: Computing the answer under the two-tier architecture with the result correctness guarantee.\nand every copy of an index (both the full IF and p-index IP ) can handle up to 1000 queries/sec.\nAlso assume that the size of IP is one fourth of IF and thus can be stored on a single machine.\nFinally, suppose that the p-indexes can handle 80% of the user queries by themselves and only forward the remaining 20% queries to IF .\nUnder this setting, since all 5000/sec user queries are first directed to a p-index, five copies of IP are needed in the 1st tier.\nFor the 2nd tier, since 20% (or 1000 queries/sec) are forwarded, we need to maintain one copy of IF to handle the load.\nOverall we need a total of 9 machines (five machines for the five copies of IP and four machines for one copy of IF ).\nCompared to Example 1, this is more than 50% reduction in the number of machines.\n2 The above example demonstrates the potential cost saving achieved by using a p-index.\nHowever, the two-tier architecture may have a significant drawback in terms of its result quality compared to the full replication of IF ; given the fact that the p-index contains only a subset of the data of the full index, it is possible that, for some queries, the p-index may not contain the top-ranked document according to the particular ranking criteria used by the search engine and fail to return it as the top page, leading to noticeable quality degradation in search results.\nGiven the fierce competition in the online search market, search engine operators desperately try to avoid any reduction in search quality in order to maximize user satisfaction.\n2.2 Correctness guarantee under two-tier architecture How can we avoid the potential degradation of search quality under the two-tier architecture?\nOur basic idea is straightforward: We use the top-k result from the p-index only if we know for sure that the result is the same as the top-k result from the full index.\nThe algorithm in Figure 2 formalizes this idea.\nIn the algorithm, when we compute the result from IP (Step 1), we compute not only the top-k result A, but also the correctness indicator function C defined as follows: Definition 1 (Correctness indicator function) Given a query q, the p-index IP returns the answer A together with a correctness indicator function C. C is set to 1 if A is guaranteed to be identical (i.e. 
same results in the same order) to the result computed from the full index IF .\nIf it is possible that A is different, C is set to 0.\n2 Note that the algorithm returns the result from IP (Step 3) only when it is identical to the result from IF (condition C = 1 in Step 2).\nOtherwise, the algorithm recomputes and returns the result from the full index IF (Step 5).\nTherefore, the algorithm is guaranteed to return the same result as the full replication of IF all the time.\nNow, the real challenge is to find out (1) how we can compute the correctness indicator function C and (2) how we should prune the index to make sure that the majority of queries are handled by IP alone.\nQuestion 1 How can we compute the correctness indicator function C?\nA straightforward way to calculate C is to compute the top-k answer both from IP and IF and compare them.\nThis naive solution, however, incurs a cost even higher than the full replication of IF because the answers are computed twice: once from IP and once from IF .\nIs there any way to compute the correctness indicator function C only from IP without computing the answer from IF ?\nQuestion 2 How should we prune IF to IP to realize the maximum cost saving?\nThe effectiveness of Algorithm 2.1 critically depends on how often the correctness indicator function C is evaluated to be 1.\nIf C = 0 for all queries, for example, the answers to all queries will be computed twice, once from IP (Step 1) and once from IF (Step 5), so the performance will be worse than the full replication of IF .\nWhat will be the optimal way to prune IF to IP , such that C = 1 for a large fraction of queries?\nIn the next few sections, we try to address these questions.\n3.\nOPTIMAL SIZE OF THE P-INDEX Intuitively, there exists a clear tradeoff between the size of IP and the fraction of queries that IP can handle: When IP is large and has more information, it will be able to handle more queries, but the cost for maintaining and looking up IP will be higher.\nWhen IP is small, on the other hand, the cost for IP will be smaller, but more queries will be forwarded to IF , requiring us to maintain more copies of IF .\nGiven this tradeoff, how should we determine the optimal size of IP in order to maximize the cost saving?\nTo find the answer, we start with a simple example.\nExample 3 Again, consider a scenario similar to Example 1, where the query load is 5000 queries/sec, each copy of an index can handle 1000 queries/sec, and the full index spans across 4 machines.\nBut now, suppose that if we prune IF by 75% to IP 1 (i.e., the size of IP 1 is 25% of IF ), IP 1 can handle 40% of the queries (i.e., C = 1 for 40% of the queries).\nAlso suppose that if IF is pruned by 50% to IP 2, IP 2 can handle 80% of the queries.\nWhich one of the IP 1, IP 2 is preferable for the 1st -tier index?\nTo find out the answer, we first compute the number of machines needed when we use IP 1 for the 1st tier.\nAt the 1st tier, we need 5 copies of IP 1 to handle the query load of 5000 queries/sec.\nSince the size of IP 1 is 25% of IF (that requires 4 machines), one copy of IP 1 requires one machine.\nTherefore, the total number of machines required for the 1st tier is 5\u00d71 = 5 (5 copies of IP 1 with 1 machine per copy).\nAlso, since IP 1 can handle 40% of the queries, the 2nd tier has to handle 3000 queries/sec (60% of the 5000 queries/sec), so we need a total of 3\u00d74 = 12 machines for the 2nd tier (3 copies of IF with 4 machines per copy).\nOverall, when we use IP 1 for the 1st tier, we need 5 + 
12 = 17 machines to handle the load. We can do similar analysis when we use IP 2 and see that a total of 14 machines are needed when IP 2 is used. Given this result, we can conclude that using IP 2 is preferable.
The above example shows that the cost of the two-tier architecture depends on two important parameters: the size of the p-index and the fraction of the queries that can be handled by the 1st tier index alone. We use s to denote the size of the p-index relative to IF (i.e., if s = 0.2, for example, the p-index is 20% of the size of IF). We use f(s) to denote the fraction of the queries that a p-index of size s can handle (i.e., if f(s) = 0.3, 30% of the queries return the value C = 1 from IP). In general, we can expect that f(s) will increase as s gets larger because IP can handle more queries as its size grows. In Figure 3, we show an example graph of f(s) over s.
Figure 3: Example function showing the fraction of guaranteed queries f(s) at a given size s of the p-index (optimal size s = 0.16).
Given the notation, we can state the problem of p-index-size optimization as follows. In formulating the problem, we assume that the number of machines required to operate a two-tier architecture is roughly proportional to the total size of the indexes necessary to handle the query load.
Problem 1 (Optimal index size) Given a query load Q and the function f(s), find the optimal p-index size s that minimizes the total size of the indexes necessary to handle the load Q.
The following theorem shows how we can determine the optimal index size.
Theorem 1 The cost for handling the query load Q is minimal when the size of the p-index, s, satisfies df(s)/ds = 1.
Proof The proof of this and the following theorems is omitted due to space constraints.
This theorem shows that the optimal point is when the slope of the f(s) curve is 1. For example, in Figure 3, the optimal size is when s = 0.16. Note that the exact shape of the f(s) graph may vary depending on the query load and the pruning policy. For example, even for the same p-index, if the query load changes significantly, fewer (or more) queries may be handled by the p-index, decreasing (or increasing) f(s). Similarly, if we use an effective pruning policy, more queries will be handled by IP than when we use an ineffective pruning policy, increasing f(s). Therefore, the function f(s) and the optimal index size may change significantly depending on the query load and the pruning policy. In our later experiments, however, we find that even though the shape of the f(s) graph changes noticeably between experiments, the optimal index size consistently lies between 10%-30% in most experiments.
4. PRUNING POLICIES
In this section, we show how we should prune the full index IF to IP, so that (1) we can compute the correctness indicator function C from IP itself and (2) we can handle a large fraction of queries by IP. In designing the pruning policies, we note the following two localities in the users' search behavior:
1. Keyword locality: Although there are many different words in the document collection that the search engine indexes, a few popular keywords constitute the majority of the query loads. This keyword locality implies that the search engine will be able to answer a significant fraction of user queries even if it can handle only these few popular
keywords.\n2.\nDocument locality: Even if a query has millions of matching documents, users typically look at only the first few results [16].\nThus, as long as search engines can compute the first few top-k answers correctly, users often will not notice that the search engine actually has not computed the correct answer for the remaining results (unless the users explicitly request them).\nBased on the above two localities, we now investigate two different types of pruning policies: (1) a keyword pruning policy, which takes advantage of the keyword locality by pruning the whole inverted list I(ti) for unpopular keywords ti``s and (2) a document pruning policy, which takes advantage of the document locality by keeping only a few postings in each list I(ti), which are likely to be included in the top-k results.\nAs we discussed before, we need to be able to compute the correctness indicator function from the pruned index alone in order to provide the correctness guarantee.\nSince the computation of correctness indicator function may critically depend on the particular ranking function used by a search engine, we first clarify our assumptions on the ranking function.\n4.1 Assumptions on ranking function Consider a query q = {t1, t2, ... , tw} that contains a subset of the index terms.\nThe goal of the search engine is to return the documents that are most relevant to query q.\nThis is done in two steps: first we use the inverted index to find all the documents that contain the terms in the query.\nSecond, once we have the relevant documents, we calculate the rank (or score) of each one of the documents with respect to the query and we return to the user the documents that rank the highest.\nMost of the major search engines today return documents containing all query terms (i.e. they use AND-semantics).\nIn order to make our discussions more concise, we will also assume the popular AND-semantics while answering a query.\nIt is straightforward to extend our results to OR-semantics as well.\nThe exact ranking function that search engines employ is a closely guarded secret.\nWhat is known, however, is that the factors in determining the document ranking can be roughly categorized into two classes: Query-dependent relevance.\nThis particular factor of relevance captures how relevant the query is to every document.\nAt a high level, given a document D, for every term ti a search engine assigns a term relevance score tr(D, ti) to D. Given the tr(D, ti) scores for every ti, then the query-dependent relevance of D to the query, noted as tr(D, q), can be computed by combining the individual term relevance values.\nOne popular way for calculating the querydependent relevance is to represent both the document D and the query q using the TF.IDF vector space model [29] and employ a cosine distance metric.\nSince the exact form of tr(D, ti) and tr(D, q) differs depending on the search engine, we will not restrict to any particular form; instead, in order to make our work applicable in the general case, we will make the generic assumption that the query-dependent relevance is computed as a function of the individual term relevance values in the query: tr(D, q) = ftr(tr(D, t1), ... 
, tr(D, tw)) (1) Query-independent document quality.\nThis is a factor that measures the overall quality of a document D independent of the particular query issued by the user.\nPopular techniques that compute the general quality of a page include PageRank [26], HITS [17] and the likelihood that the page is a spam page [25, 15].\nHere, we will use pr(D) to denote this query-independent part of the final ranking function for document D.\nThe final ranking score r(D, q) of a document will depend on both the query-dependent and query-independent parts of the ranking function.\nThe exact combination of these parts may be done in a variety of ways.\nIn general, we can assume that the final ranking score of a document is a function of its query-dependent and query-independent relevance scores.\nMore formally: r(D, q) = fr(tr(D, q), pr(D)) (2) For example, fr(tr(D, q), pr(D)) may take the form fr(tr(D, q), pr(D)) = \u03b1 \u00b7 tr(D, q) + (1 \u2212 \u03b1) \u00b7 pr(D), thus giving weight \u03b1 to the query-dependent part and the weight 1 \u2212 \u03b1 to the query-independent part.\nIn Equations 1 and 2 the exact form of fr and ftr can vary depending on the search engine.\nTherefore, to make our discussion applicable independent of the particular ranking function used by search engines, in this paper, we will make only the generic assumption that the ranking function r(D, q) is monotonic on its parameters tr(D, t1), ... , tr(D, tw) and pr(D).\nt1 \u2192 D1 D2 D3 D4 D5 D6 t2 \u2192 D1 D2 D3 t3 \u2192 D3 D5 D7 D8 t4 \u2192 D4 D10 t5 \u2192 D6 D8 D9 Figure 4: Keyword and document pruning.\nAlgorithm 4.1 Computation of C for keyword pruning Procedure (1) C = 1 (2) Foreach ti \u2208 q (3) If (I(ti) /\u2208 IP ) Then C = 0 (4) Return C Figure 5: Result guarantee in keyword pruning.\nDefinition 2 A function f(\u03b1, \u03b2, ... , \u03c9) is monotonic if \u2200\u03b11 \u2265 \u03b12, \u2200\u03b21 \u2265 \u03b22, ... \u2200\u03c91 \u2265 \u03c92 it holds that: f(\u03b11, \u03b21, ... , \u03c91) \u2265 f(\u03b12, \u03b22, ... , \u03c92).\nRoughly, the monotonicity of the ranking function implies that, between two documents D1 and D2, if D1 has higher querydependent relevance than D2 and also a higher query-independent score than D2, then D1 should be ranked higher than D2, which we believe is a reasonable assumption in most practical settings.\n4.2 Keyword pruning Given our assumptions on the ranking function, we now investigate the keyword pruning policy, which prunes the inverted index IF horizontally by removing the whole I(ti)``s corresponding to the least frequent terms.\nIn Figure 4 we show a graphical representation of keyword pruning, where we remove the inverted lists for t3 and t5, assuming that they do not appear often in the query load.\nNote that after keyword pruning, if all keywords {t1, ... 
, tn} in the query q appear in IP , the p-index has the same information as IF as long as q is concerned.\nIn other words, if all keywords in q appear in IP , the answer computed from IP is guaranteed to be the same as the answer computed from IF .\nFigure 5 formalizes this observation and computes the correctness indicator function C for a keyword-pruned index IP .\nIt is straightforward to prove that the answer from IP is identical to that from IF if C = 1 in the above algorithm.\nWe now consider the issue of optimizing the IP such that it can handle the largest fraction of queries.\nThis problem can be formally stated as follows: Problem 2 (Optimal keyword pruning) Given the query load Q and a goal index size s \u00b7 |IF | for the pruned index, select the inverted lists IP = {I(t1), ... , I(th)} such that |IP | \u2264 s \u00b7 |IF | and the fraction of queries that IP can answer (expressed by f(s)) is maximized.\n2 Unfortunately, the optimal solution to the above problem is intractable as we can show by reducing from knapsack (we omit the complete proof).\nTheorem 2 The problem of calculating the optimal keyword pruning is NP-hard.\n2 Given the intractability of the optimal solution, we need to resort to an approximate solution.\nA common approach for similar knapsack problems is to adopt a greedy policy by keeping the items with the maximum benefit per unit cost [9].\nIn our context, the potential benefit of an inverted list I(ti) is the number of queries that can be answered by IP when I(ti) is included in IP .\nWe approximate this number by the fraction of queries in the query load Q that include the term ti and represent it as P(ti).\nFor example, if 100 out of 1000 queries contain the term computer, Algorithm 4.2 Greedy keyword pruning HS Procedure (1) \u2200ti, calculate HS(ti) = P (ti) |I(ti)| .\n(2) Include the inverted lists with the highest HS(ti) values such that |IP | \u2264 s \u00b7 |IF |.\nFigure 6: Approximation algorithm for the optimal keyword pruning.\nAlgorithm 4.3 Global document pruning V SG Procedure (1) Sort all documents Di based on pr(Di) (2) Find the threshold value \u03c4p, such that only s fraction of the documents have pr(Di) > \u03c4p (4) Keep Di in the inverted lists if pr(Di) > \u03c4p Figure 7: Global document pruning based on pr.\nthen P(computer) = 0.1.\nThe cost of including I(ti) in the pindex is its size |I(ti)|.\nThus, in our greedy approach in Figure 6, we include I(ti)``s in the decreasing order of P(ti)/|I(ti)| as long as |IP | \u2264 s \u00b7 |IF |.\nLater in our experiment section, we evaluate what fraction of queries can be handled by IP when we employ this greedy keyword-pruning policy.\n4.3 Document pruning At a high level, document pruning tries to take advantage of the observation that most users are mainly interested in viewing the top few answers to a query.\nGiven this, it is unnecessary to keep all postings in an inverted list I(ti), because users will not look at most of the documents in the list anyway.\nWe depict the conceptual diagram of the document pruning policy in Figure 4.\nIn the figure, we vertically prune postings corresponding to D4, D5 and D6 of t1 and D8 of t3, assuming that these documents are unlikely to be part of top-k answers to user queries.\nAgain, our goal is to develop a pruning policy such that (1) we can compute the correctness indicator function C from IP alone and (2) we can handle the largest fraction of queries with IP .\nIn the next few sections, we discuss a few alternative approaches for document 
pruning.\n4.3.1 Global PR-based pruning We first investigate the pruning policy that is commonly used by existing search engines.\nThe basic idea for this pruning policy is that the query-independent quality score pr(D) is a very important factor in computing the final ranking of the document (e.g. PageRank is known to be one of the most important factors determining the overall ranking in the search results), so we build the p-index by keeping only those documents whose pr values are high (i.e., pr(D) > \u03c4p for a threshold value \u03c4p).\nThe hope is that most of the top-ranked results are likely to have high pr(D) values, so the answer computed from this p-index is likely to be similar to the answer computed from the full index.\nFigure 7 describes this pruning policy more formally, where we sort all documents Di``s by their respective pr(Di) values and keep a Di in the p-index when its Algorithm 4.4 Local document pruning V SL N: maximum size of a single posting list Procedure (1) Foreach I(ti) \u2208 IF (2) Sort Di``s in I(ti) based on pr(Di) (3) If |I(ti)| \u2264 N Then keep all Di``s (4) Else keep the top-N Di``s with the highest pr(Di) Figure 8: Local document pruning based on pr.\nAlgorithm 4.5 Extended keyword-specific document pruning Procedure (1) For each I(ti) (2) Keep D \u2208 I(ti) if pr(D) > \u03c4pi or tr(D, ti) > \u03c4ti Figure 9: Extended keyword-specific document pruning based on pr and tr.\npr(Di) value is higher than the global threshold value \u03c4p.\nWe refer to this pruning policy as global PR-based pruning (GPR).\nVariations of this pruning policy are possible.\nFor example, we may adjust the threshold value \u03c4p locally for each inverted list I(ti), so that we maintain at least a certain number of postings for each inverted list I(ti).\nThis policy is shown in Figure 8.\nWe refer to this pruning policy as local PR-based pruning (LPR).\nUnfortunately, the biggest shortcoming of this policy is that we can prove that we cannot compute the correctness function C from IP alone when IP is constructed this way.\nTheorem 3 No PR-based document pruning can provide the result guarantee.\n2 Proof Assume we create IP based on the GPR policy (generalizing the proof to LPR is straightforward) and that every document D with pr(D) > \u03c4p is included in IP .\nAssume that the kth entry in the top-k results, has a ranking score of r(Dk, q) = fr(tr(Dk, q), pr(Dk)).\nNow consider another document Dj that was pruned from IP because pr(Dj) < \u03c4p.\nEven so, it is still possible that the document``s tr(Dj, q) value is very high such that r(Dj, q) = fr(tr(Dj, q), pr(Dj)) > r(Dk, q).\nTherefore, under a PR-based pruning policy, the quality of the answer computed from IP can be significantly worse than that from IF and it is not possible to detect this degradation without computing the answer from IF .\nIn the next section, we propose simple yet essential changes to this pruning policy that allows us to compute the correctness function C from IP alone.\n4.3.2 Extended keyword-specific pruning The main problem of global PR-based document pruning policies is that we do not know the term-relevance score tr(D, ti) of the pruned documents, so a document not in IP may have a higher ranking score than the ones returned from IP because of their high tr scores.\nHere, we propose a new pruning policy, called extended keyword-specific document pruning (EKS), which avoids this problem by pruning not just based on the query-independent pr(D) score but also based on the term-relevance 
tr(D, ti) score.\nThat is, for every inverted list I(ti), we pick two threshold values, \u03c4pi for pr and \u03c4ti for tr, such that if a document D \u2208 I(ti) satisfies pr(D) > \u03c4pi or tr(D, ti) > \u03c4ti, we include it in I(ti) of IP .\nOtherwise, we prune it from IP .\nFigure 9 formally describes this algorithm.\nThe threshold values, \u03c4pi and \u03c4ti, may be selected in a number of different ways.\nFor example, if pr and tr have equal weight in the final ranking and if we want to keep at most N postings in each inverted list I(ti), we may want to set the two threshold values equal to \u03c4i (\u03c4pi = \u03c4ti = \u03c4i) and adjust \u03c4i such that N postings remain in I(ti).\nThis new pruning policy, when combined with a monotonic scoring function, enables us to compute the correctness indicator function C from the pruned index.\nWe use the following example to explain how we may compute C. Example 4 Consider the query q = {t1, t2} and a monotonic ranking function, f(pr(D), tr(D, t1), tr(D, t2)).\nThere are three possible scenarios on how a document D appears in the pruned index IP .\n1.\nD appears in both I(t1) and I(t2) of IP : Since complete information of D appears in IP , we can compute the exact Algorithm 4.6 Computing Answer from IP Input Query q = {t1, ... , tw} Output A: top-k result, C: correctness indicator function Procedure (1) For each Di \u2208 I(t1) \u222a \u00b7 \u00b7 \u00b7 \u222a I(tw) (2) For each tm \u2208 q (3) If Di \u2208 I(tm) (4) tr\u2217(Di, tm) = tr(Di, tm) (5) Else (6) tr\u2217(Di, tm) = \u03c4tm (7) f(Di) = f(pr(Di), tr\u2217(Di, t1), ... , tr\u2217(Di, tn)) (8) A = top-k Di``s with highest f(Di) values (9) C = j 1 if all Di \u2208 A appear in all I(ti), ti \u2208 q 0 otherwise Figure 10: Ranking based on thresholds tr\u03c4 (ti) and pr\u03c4 (ti).\nscore of D based on pr(D), tr(D, t1) and tr(D, t2) values in IP : f(pr(D), tr(D, t1), tr(D, t2)).\n2.\nD appears only in I(t1) but not in I(t2): Since D does not appear in I(t2), we do not know tr(D, t2), so we cannot compute its exact ranking score.\nHowever, from our pruning criteria, we know that tr(D, t2) cannot be larger than the threshold value \u03c4t2.\nTherefore, from the monotonicity of f (Definition 2), we know that the ranking score of D, f(pr(D), tr(D, t1), tr(D, t2)), cannot be larger than f(pr(D), tr(D, t1), \u03c4t2).\n3.\nD does not appear in any list: Since D does not appear at all in IP , we do not know any of the pr(D), tr(D, t1), tr(D, t2) values.\nHowever, from our pruning criteria, we know that pr(D) \u2264 \u03c4p1 and \u2264 \u03c4p2 and that tr(D, t1) \u2264 \u03c4t1 and tr(D, t2) \u2264 \u03c4t2.\nTherefore, from the monotonicity of f, we know that the ranking score of D, cannot be larger than f(min(\u03c4p1, \u03c4p2), \u03c4t1, \u03c4t2).\n2 The above example shows that when a document does not appear in one of the inverted lists I(ti) with ti \u2208 q, we cannot compute its exact ranking score, but we can still compute its upper bound score by using the threshold value \u03c4ti for the missing values.\nThis suggests the algorithm in Figure 10 that computes the top-k result A from IP together with the correctness indicator function C.\nIn the algorithm, the correctness indicator function C is set to one only if all documents in the top-k result A appear in all inverted lists I(ti) with ti \u2208 q, so we know their exact score.\nIn this case, because these documents have scores higher than the upper bound scores of any other documents, we know that no other 
documents can appear in the top-k.\nThe following theorem formally proves the correctness of the algorithm.\nIn [11] Fagin et al., provides a similar proof in the context of multimedia middleware.\nTheorem 4 Given an inverted index IP pruned by the algorithm in Figure 9, a query q = {t1, ... , tw} and a monotonic ranking function, the top-k result from IP computed by Algorithm 4.6 is the same as the top-k result from IF if C = 1.\n2 Proof Let us assume Dk is the kth ranked document computed from IP according to Algorithm 4.6.\nFor every document Di \u2208 IF that is not in the top-k result from IP , there are two possible scenarios: First, Di is not in the final answer because it was pruned from all inverted lists I(tj), 1 \u2264 j \u2264 w, in IP .\nIn this case, we know that pr(Di) \u2264 min1\u2264j\u2264w\u03c4pj < pr(Dk) and that tr(Di, tj) \u2264 \u03c4tj < tr(Dk, tj), 1 \u2264 j \u2264 w. From the monotonicity assumption, it follows that the ranking score of DI is r(Di) < r(Dk).\nThat is, Di``s score can never be larger than that of Dk.\nSecond, Di is not in the answer because Di is pruned from some inverted lists, say, I(t1), ... , I(tm), in IP .\nLet us assume \u00afr(Di) = f(pr(Di),\u03c4t1,... ,\u03c4tm,tr(Di, tm+1),... ,tr(Di, tw)).\nThen, from tr(Di, tj) \u2264 \u03c4tj(1 \u2264 j \u2264 m) and the monotonicity assumption, 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Fractionofqueriesguaranteed\u2212f(s) Fraction of index \u2212 s Fraction of queries guaranteed per fraction of index queries guaranteed Figure 11: Fraction of guaranteed queries f(s) answered in a keyword-pruned p-index of size s. we know that r(Di) \u2264 \u00afr(Di).\nAlso, Algorithm 4.6 sets C = 1 only when the top-k documents have scores larger than \u00afr(Di).\nTherefore, r(Di) cannot be larger than r(Dk).\n5.\nEXPERIMENTAL EVALUATION In order to perform realistic tests for our pruning policies, we implemented a search engine prototype.\nFor the experiments in this paper, our search engine indexed about 130 million pages, crawled from the Web during March of 2004.\nThe crawl started from the Open Directory``s [10] homepage and proceeded in a breadth-first manner.\nOverall, the total uncompressed size of our crawled Web pages is approximately 1.9 TB, yielding a full inverted index IF of approximately 1.2 TB.\nFor the experiments reported in this section we used a real set of queries issued to Looksmart [22] on a daily basis during April of 2003.\nAfter keeping only the queries containing keywords that were present in our inverted index, we were left with a set of about 462 million queries.\nWithin our query set, the average number of terms per query is 2 and 98% of the queries contain at most 5 terms.\nSome experiments require us to use a particular ranking function.\nFor these, we use the ranking function similar to the one used in [20].\nMore precisely, our ranking function r(D, q) is r(D, q) = prnorm(D) + trnorm(D, q) (3) where prnorm(D) is the normalized PageRank of D computed from the downloaded pages and trnorm(D, q) is the normalized TF.IDF cosine distance of D to q.\nThis function is clearly simpler than the real functions employed by commercial search engines, but we believe for our evaluation this simple function is adequate, because we are not studying the effectiveness of a ranking function, but the effectiveness of pruning policies.\n5.1 Keyword pruning In our first experiment we study the performance of the keyword 
pruning, described in Section 4.2. More specifically, we apply the algorithm HS of Figure 6 to our full index IF and create a keyword-pruned p-index IP of size s. For the construction of our keyword-pruned p-index we used the query frequencies observed during the first 10 days of our data set. Then, using the remaining 20-day query load, we measured f(s), the fraction of queries handled by IP. According to the algorithm of Figure 5, a query can be handled by IP (i.e., C = 1) if IP includes the inverted lists for all of the query's keywords. We have repeated the experiment for varying values of s, picking the keywords greedily as discussed in Section 4.2. The result is shown in Figure 11. The horizontal axis denotes the size s of the p-index as a fraction of the size of IF. The vertical axis shows the fraction f(s) of the queries that the p-index of size s can answer. The results of Figure 11 are very encouraging: we can answer a significant fraction of the queries with a small fraction of the original index. For example, approximately 73% of the queries can be answered using 30% of the original index. Also, we find that when we use the keyword pruning policy only, the optimal index size is s = 0.17.
Figure 12: Fraction of guaranteed queries f(s) answered in a document-pruned p-index of size s (EKS, top-20).
Figure 13: Fraction of queries answered in a document-pruned p-index of size s (GPR, LPR and EKS, top-20).
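To make the keyword-pruning evaluation of Section 5.1 concrete, the following is a minimal Python sketch, not the authors' implementation, of one simple variant of the greedy selection in Algorithm 4.2 together with the f(s) measurement implied by Algorithm 4.1. The inputs are assumptions made for illustration: list_sizes maps each term to the size of its inverted list, and train_queries / eval_queries are query logs represented as lists of keyword lists.

from collections import Counter

def greedy_keyword_pruning(list_sizes, train_queries, s):
    # Algorithm 4.2 (HS): keep the inverted lists with the highest
    # HS(t) = P(t) / |I(t)|, where P(t) is the fraction of training queries
    # containing t, subject to the size budget s * |IF|.
    budget = s * sum(list_sizes.values())
    containing = Counter(t for q in train_queries for t in set(q))
    p = {t: containing[t] / len(train_queries) for t in list_sizes}
    kept, used = set(), 0
    for t in sorted(list_sizes, key=lambda t: p[t] / list_sizes[t], reverse=True):
        if used + list_sizes[t] <= budget:
            kept.add(t)
            used += list_sizes[t]
    return kept

def fraction_guaranteed(kept_terms, eval_queries):
    # Algorithm 4.1: a keyword-pruned p-index guarantees a query (C = 1)
    # only if the inverted lists of all of its keywords survived pruning.
    hits = sum(1 for q in eval_queries if all(t in kept_terms for t in q))
    return hits / len(eval_queries)

Sweeping s with such a harness and plotting fraction_guaranteed against s is essentially how a curve like Figure 11 is produced: the pruning is fitted on one slice of the query log and f(s) is measured on a held-out slice.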
5.2 Document pruning
We continue our experimental evaluation by studying the performance of the various document pruning policies described in Section 4.3. For the experiments on document pruning reported here we worked with a 5.5% sample of the whole query set. The reason behind this is merely practical: since we have far fewer machines than a commercial search engine, it would take us about a year of computation to process all 462 million queries. For our first experiment, we generate a document-pruned p-index of size s by using the Extended Keyword-Specific pruning (EKS) in Section 4. Within the p-index we measure the fraction of queries that can be guaranteed (according to Theorem 4) to be correct. We have performed the experiment for varying index sizes s and the result is shown in Figure 12. Based on this figure, we can see that our document pruning algorithm performs well across the scale of index sizes s: for all index sizes larger than 40%, we can guarantee the correct answer for about 70% of the queries. This implies that our EKS algorithm can successfully identify the necessary postings for calculating the top-20 results for 70% of the queries by using at least 40% of the full index size. From the figure, we can see that the optimal index size is s = 0.20 when we use EKS as our pruning policy.
We can compare the two pruning schemes, namely the keyword pruning and EKS, by contrasting Figures 11 and 12. Our observation is that, if we had to pick one of the two pruning policies, the two policies seem to be more or less equivalent for p-index sizes s ≤ 20%. For p-index sizes s > 20%, keyword pruning does a much better job, as it provides a higher number of guarantees at any given index size. Later, in Section 5.3, we discuss the combination of the two policies.
In our next experiment, we are interested in comparing EKS with the PR-based pruning policies described in Section 4.3. To this end, apart from EKS, we also generated document-pruned p-indexes for the Global pr-based pruning (GPR) and the Local pr-based pruning (LPR) policies. For each of the policies we created document-pruned p-indexes of varying sizes s. Since GPR and LPR cannot provide a correctness guarantee, we will compare the fraction of queries from each policy that are identical (i.e., the same results in the same order) to the top-k results calculated from the full index. Here, we will report our results for k = 20; the results are similar for other values of k. The results are shown in Figure 13. The horizontal axis shows the size s of the p-index; the vertical axis shows the fraction f(s) of the queries whose top-20 results are identical to the top-20 results of the full index, for a given size s.
Figure 14: Average fraction of the top-20 results of p-index with size s contained in top-20 results of the full index (GPR, LPR and EKS).
Figure 15: Combining keyword and document pruning: fraction of queries guaranteed for top-20 as a function of the keyword fraction of the index (sh) and the document fraction of the index (sv).
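The guaranteed fractions reported in Figures 12 and 15 come from the correctness indicator C of Algorithm 4.6, so a compact sketch may help make that mechanism explicit. The code below is an illustration under assumed data structures rather than the authors' implementation: pruned maps each query term to the {document: tr score} postings that survived EKS pruning, pr holds the query-independent scores of the documents kept in the p-index, tau_t holds the per-term thresholds τti, and rank is any monotonic ranking function (for example, an additive combination in the spirit of Equation 3); all of these names are hypothetical.

import heapq

def answer_from_pindex(query, k, pruned, pr, tau_t, rank):
    # Algorithm 4.6: score every document that survives in at least one of the
    # query terms' pruned lists, substituting the threshold tau_t[t] as an
    # upper bound whenever the actual tr score was pruned away.
    candidates = set()
    for t in query:
        candidates.update(pruned.get(t, {}))
    scored = []
    for d in candidates:
        trs = [pruned.get(t, {}).get(d, tau_t[t]) for t in query]
        scored.append((rank(pr[d], trs), d))
    top = heapq.nlargest(k, scored, key=lambda pair: pair[0])
    answer = [d for _, d in top]
    # C = 1 only if every top-k document appears in every query term's list,
    # so its score is exact and, by monotonicity, no pruned document can
    # outrank it (Theorem 4).
    C = int(all(d in pruned.get(t, {}) for d in answer for t in query))
    return answer, C

Whenever C comes back as 0, the query would be forwarded to the full index IF, exactly as in the two-tier flow of Figure 2.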
By observing Figure 13, we can see that GPR performs the worst of the three policies.\nOn the other hand EKS, picks up early, by answering a great fraction of queries (about 62%) correctly with only 10% of the index size.\nThe fraction of queries that LPR can answer remains below that of EKS until about s = 37%.\nFor any index size larger than 37%, LPR performs the best.\nIn the experiment of Figure 13, we applied the strict definition that the results of the p-index have to be in the same order as the ones of the full index.\nHowever, in a practical scenario, it may be acceptable to have some of the results out of order.\nTherefore, in our next experiment we will measure the fraction of the results coming from an p-index that are contained within the results of the full index.\nThe result of the experiment is shown on Figure 14.\nThe horizontal axis is, again, the size s of the p-index; the vertical axis shows the average fraction of the top-20 results common with the top-20 results from the full index.\nOverall, Figure 14 depicts that EKS and LPR identify the same high (\u2248 96%) fraction of results on average for any size s \u2265 30%, with GPR not too far behind.\n5.3 Combining keyword and document pruning In Sections 5.1 and 5.2 we studied the individual performance of our keyword and document pruning schemes.\nOne interesting question however is how do these policies perform in combination?\nWhat fraction of queries can we guarantee if we apply both keyword and document pruning in our full index IF ?\nTo answer this question, we performed the following experiment.\nWe started with the full index IF and we applied keyword pruning to create an index Ih P of size sh \u00b7 100% of IF .\nAfter that, we further applied document pruning to Ih P , and created our final pindex IP of size sv \u00b7100% of Ih P .\nWe then calculated the fraction of guaranteed queries in IP .\nWe repeated the experiment for different values of sh and sv.\nThe result is shown on Figure 15.\nThe x-axis shows the index size sh after applying keyword pruning; the y-axis shows the index size sv after applying document pruning; the z-axis shows the fraction of guaranteed queries after the two prunings.\nFor example the point (0.2, 0.3, 0.4) means that if we apply keyword pruning and keep 20% of IF , and subsequently on the resulting index we apply document pruning keeping 30% (thus creating a pindex of size 20%\u00b730% = 6% of IF ) we can guarantee 40% of the queries.\nBy observing Figure 15, we can see that for p-index sizes smaller than 50%, our combined pruning does relatively well.\nFor example, by performing 40% keyword and 40% document pruning (which translates to a pruned index with s = 0.16) we can provide a guarantee for about 60% of the queries.\nIn Figure 15, we also observe a plateau for sh > 0.5 and sv > 0.5.\nFor this combined pruning policy, the optimal index size is at s = 0.13, with sh = 0.46 and sv = 0.29.\n6.\nRELATED WORK [3, 30] provide a good overview of inverted indexing in Web search engines and IR systems.\nExperimental studies and analyses of various partitioning schemes for an inverted index are presented in [6, 23, 33].\nThe pruning algorithms that we have presented in this paper are independent of the partitioning scheme used.\nThe works in [1, 5, 7, 20, 27] are the most related to ours, as they describe pruning techniques based on the idea of keeping the postings that contribute the most in the final ranking.\nHowever, [1, 5, 7, 27] do not consider any query-independent quality (such 
as PageRank) in the ranking function.\n[32] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of results.\nOur work essentially extends [1, 2, 4, 7, 20, 27, 31] by proposing mechanisms for providing the correctness guarantee to the computed top-k results.\nSearch engines use various methods of caching as a means of reducing the cost associated with queries [18, 19, 21, 31].\nThis thread of work is also orthogonal to ours because a caching scheme may operate on top of our p-index in order to minimize the answer computation cost.\nThe exact ranking functions employed by current search engines are closely guarded secrets.\nIn general, however, the rankings are based on query-dependent relevance and queryindependent document quality.\nQuery-dependent relevance can be calculated in a variety of ways (see [3, 30]).\nSimilarly, there are a number of works that measure the quality of the documents, typically as captured through link-based analysis [17, 28, 26].\nSince our work does not assume a particular form of ranking function, it is complementary to this body of work.\nThere has been a great body of work on top-k result calculation.\nThe main idea is to either stop the traversal of the inverted lists early, or to shrink the lists by pruning postings from the lists [14, 4, 11, 8].\nOur proof for the correctness indicator function was primarily inspired by [12].\n7.\nCONCLUDING REMARKS Web search engines typically prune their large-scale inverted indexes in order to scale to enormous query loads.\nWhile this approach may improve performance, by computing the top results from a pruned index we may notice a significant degradation in the result quality.\nIn this paper, we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order.\nWe studied two pruning techniques, namely keyword-based and document-based pruning as well as their combination.\nOur experimental results demonstrated that our algorithms can effectively be used to prune an inverted index without degradation in the quality of results.\nIn particular, a keyword-pruned index can guarantee 73% of the queries with a size of 30% of the full index, while a document-pruned index can guarantee 68% of the queries with the same size.\nWhen we combine the two pruning algorithms we can guarantee 60% of the queries with an index size of 16%.\nIt is our hope that our work will help search engines develop better, faster and more efficient indexes and thus provide for a better user search experience on the Web.\n8.\nREFERENCES [1] V. N. Anh, O. de Kretser, and A. Moffat.\nVector-space ranking with effective early termination.\nIn SIGIR, 2001.\n[2] V. N. Anh and A. Moffat.\nPruning strategies for mixed-mode querying.\nIn CIKM, 2006.\n[3] R. A. Baeza-Yates and B. A. Ribeiro-Neto.\nModern Information Retrieval.\nACM Press / Addison-Wesley, 1999.\n[4] N. Bruno, L. Gravano, and A. Marian.\nEvaluating top-k queries over web-accessible databases.\nIn ICDE, 2002.\n[5] S. B\u00a8uttcher and C. L. A. Clarke.\nA document-centric approach to static index pruning in text retrieval systems.\nIn CIKM, 2006.\n[6] B. Cahoon, K. S. McKinley, and Z. Lu.\nEvaluating the performance of distributed architectures for information retrieval using a variety of workloads.\nACM TOIS, 18(1), 2000.\n[7] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. Maarek, and A. 
Soffer.\nStatic index pruning for information retrieval systems.\nIn SIGIR, 2001.\n[8] S. Chaudhuri and L. Gravano.\nOptimizing queries over multimedia repositories.\nIn SIGMOD, 1996.\n[9] T. H. Cormen, C. E. Leiserson, and R. L. Rivest.\nIntroduction to Algorithms, 2nd Edition.\nMIT Press/McGraw Hill, 2001.\n[10] Open directory.\nhttp://www.dmoz.org.\n[11] R. Fagin.\nCombining fuzzy information: an overview.\nIn SIGMOD Record, 31(2), 2002.\n[12] R. Fagin, A. Lotem, and M. Naor.\nOptimal aggregation algorithms for middleware.\nIn PODS, 2001.\n[13] A. Gulli and A. Signorini.\nThe indexable web is more than 11.5 billion pages.\nIn WWW, 2005.\n[14] U. Guntzer, G. Balke, and W. Kiessling.\nTowards efficient multi-feature queries in heterogeneous environments.\nIn ITCC, 2001.\n[15] Z. Gy\u00a8ongyi, H. Garcia-Molina, and J. Pedersen.\nCombating web spam with trustrank.\nIn VLDB, 2004.\n[16] B. J. Jansen and A. Spink.\nAn analysis of web documents retrieved and viewed.\nIn International Conf.\non Internet Computing, 2003.\n[17] J. Kleinberg.\nAuthoritative sources in a hyperlinked environment.\nJournal of the ACM, 46(5):604-632, September 1999.\n[18] R. Lempel and S. Moran.\nPredictive caching and prefetching of query results in search engines.\nIn WWW, 2003.\n[19] R. Lempel and S. Moran.\nOptimizing result prefetching in web search engines with segmented indices.\nACM Trans.\nInter.\nTech., 4(1), 2004.\n[20] X. Long and T. Suel.\nOptimized query execution in large search engines with global page ordering.\nIn VLDB, 2003.\n[21] X. Long and T. Suel.\nThree-level caching for efficient query processing in large web search engines.\nIn WWW, 2005.\n[22] Looksmart inc. http://www.looksmart.com.\n[23] S. Melnik, S. Raghavan, B. Yang, and H. Garcia-Molina.\nBuilding a distributed full-text index for the web.\nACM TOIS, 19(3):217-241, 2001.\n[24] A. Ntoulas, J. Cho, C. Olston.\nWhat``s new on the web?\nThe evolution of the web from a search engine perspective.\nIn WWW, 2004.\n[25] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly.\nDetecting spam web pages through content analysis.\nIn WWW, 2006.\n[26] L. Page, S. Brin, R. Motwani, and T. Winograd.\nThe pagerank citation ranking: Bringing order to the web.\nTechnical report, Stanford University.\n[27] M. Persin, J. Zobel, and R. Sacks-Davis.\nFiltered document retrieval with frequency-sorted indexes.\nJournal of the American Society of Information Science, 47(10), 1996.\n[28] M. Richardson and P. Domingos.\nThe intelligent surfer: Probabilistic combination of link and content information in pagerank.\nIn Advances in Neural Information Processing Systems, 2002.\n[29] S. Robertson and K. Sp\u00a8arck-Jones.\nRelevance weighting of search terms.\nJournal of the American Society for Information Science, 27:129-46, 1976.\n[30] G. Salton and M. J. McGill.\nIntroduction to modern information retrieval.\nMcGraw-Hill, first edition, 1983.\n[31] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and B. Riberio-Neto.\nRank-preserving two-level caching for scalable search engines.\nIn SIGIR, 2001.\n[32] M. Theobald, G. Weikum, and R. Schenkel.\nTop-k query evaluation with probabilistic guarantees.\nIn VLDB, 2004.\n[33] A. Tomasic and H. 
Garcia-Molina.\nPerformance of inverted indices in shared-nothing distributed text document information retrieval systems.\nIn Parallel and Distributed Information Systems, 1993.", "lvl-3": "Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee\nABSTRACT\nThe Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information .\nIn order to cope with the vast amounts of query loads , search engines prune their index to keep documents that are likely to be returned as top results , and use this pruned index to compute the first batches of results .\nWhile this approach can improve performance by reducing the size of the index , if we compute the top results only from the pruned index we may notice a significant degradation in the result quality : if a document should be in the top results but was not included in the pruned index , it will be placed behind the results computed from the pruned index .\nGiven the fierce competition in the online search market , this phenomenon is clearly undesirable .\nIn this paper , we study how we can avoid any degradation of result quality due to the pruning-based performance optimization , while still realizing most of its benefit .\nOur contribution is a number of modifications in the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top search results , even though we are computing the first batch from the pruned index most of the time .\nWe also show how to determine the optimal size of a pruned index and we experimentally evaluate our algorithms on a collection of 130 million Web pages .\n1 .\nINTRODUCTION\nThe amount of information on the Web is growing at a prodigious rate [ 24 ] .\nAccording to a recent study [ 13 ] , it is estimated that the \u2217 Work done while author was at UCLA Computer Science Department .\n\u2020 This work is partially supported by NSF grants , IIS-0534784 , IIS0347993 , and CNS-0626702 .\nAny opinions , findings , and conclusions or recommendations expressed in this material are those of the author ( s ) and do not necessarily reflect the views of the funding institutions .\nWeb currently consists of more than 11 billion pages .\nDue to this immense amount of available information , the users are becoming more and more dependent on the Web search engines for locating relevant information on the Web .\nTypically , the Web search engines , similar to other information retrieval applications , utilize a data structure called inverted index .\nAn inverted index provides for the efficient retrieval of the documents ( or Web pages ) that contain a particular keyword .\nIn most cases , a query that the user issues may have thousands or even millions of matching documents .\nIn order to avoid overwhelming the users with a huge amount of results , the search engines present the results in batches of 10 to 20 relevant documents .\nThe user then looks through the first batch of results and , if she does n't find the answer she is looking for , she may potentially request to view the next batch or decide to issue a new query .\nA recent study [ 16 ] indicated that approximately 80 % of the users examine at most the first 3 batches of the results .\nThat is , 80 % of the users typically view at most 30 to 60 results for every query that they issue to a search engine .\nAt the same time , given the size of the Web , the inverted index that the 
search engines maintain can grow very large .\nSince the users are interested in a small number of results ( and thus are viewing a small portion of the index for every query that they issue ) , using an index that is capable of returning all the results for a query may constitute a significant waste in terms of time , storage space and computational resources , which is bound to get worse as the Web grows larger over time [ 24 ] .\nOne natural solution to this problem is to create a small index on a subset of the documents that are likely to be returned as the top results ( by using , for example , the pruning techniques in [ 7 , 20 ] ) and compute the first batch of answers using the pruned index .\nWhile this approach has been shown to give significant improvement in performance , it also leads to noticeable degradation in the quality of the search results , because the top answers are computed only from the pruned index [ 7 , 20 ] .\nThat is , even if a page should be placed as the top-matching page according to a search engine 's ranking metric , the page may be placed behind the ones contained in the pruned index if the page did not become part of the pruned index for various reasons [ 7 , 20 ] .\nGiven the fierce competition among search engines today this degradation is clearly undesirable and needs to be addressed if possible .\nIn this paper , we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit .\nThat is , we present a number of simple ( yet important ) changes in the pruning techniques for creating the pruned index .\nOur main contribution is a new answer computation algorithm that guarantees that the top-matching pages ( according to the search-engine 's ranking metric ) are always placed at the top of search results , even though we are computing the first batch of answers from the pruned index most of the time .\nThese enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today 's search engines .\nFinally , we study and present how search engines can minimize the operational cost of answering queries while providing high quality search results .\nFigure 1 : ( a ) Search engine replicates its full index IF to in\ncrease query-answering capacity .\n( b ) In the 1st tier , small pindexes IP handle most of the queries .\nWhen IP can not answer a query , it is redirected to the 2nd tier , where the full index IF is used to compute the answer .\n2 .\nCLUSTER ARCHITECTURE AND COST SAVINGS FROM A PRUNED INDEX\n2.1 Two-tier index architecture\n2.2 Correctness guarantee under two-tier architecture\n3 .\nOPTIMAL SIZE OF THE P-INDEX\n4 .\nPRUNING POLICIES\n4.1 Assumptions on ranking function\n4.2 Keyword pruning\n4.3 Document pruning\n4.3.1 Global PR-based pruning\n4.3.2 Extended keyword-specific pruning\n5 .\nEXPERIMENTAL EVALUATION\n5.1 Keyword pruning\n5.2 Document pruning\n5.3 Combining keyword and document pruning\n6 .\nRELATED WORK\n[ 3 , 30 ] provide a good overview of inverted indexing in Web search engines and IR systems .\nExperimental studies and analyses of various partitioning schemes for an inverted index are presented in [ 6 , 23 , 33 ] .\nThe pruning algorithms that we have presented in this paper are independent of the partitioning scheme used .\nThe works in [ 1 , 5 , 7 , 20 , 27 ] are the most related to ours , as they describe pruning techniques based on the idea of keeping the postings that 
contribute the most in the final ranking .\nHowever , [ 1 , 5 , 7 , 27 ] do not consider any query-independent quality ( such as PageRank ) in the ranking function .\n[ 32 ] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of results .\nOur work essentially extends [ 1 , 2 , 4 , 7 , 20 , 27 , 31 ] by proposing mechanisms for providing the correctness guarantee to the computed top-k results .\nSearch engines use various methods of caching as a means of reducing the cost associated with queries [ 18 , 19 , 21 , 31 ] .\nThis thread of work is also orthogonal to ours because a caching scheme may operate on top of our p-index in order to minimize the answer computation cost .\nThe exact ranking functions employed by current search engines are closely guarded secrets .\nIn general , however , the rankings are based on query-dependent relevance and queryindependent document `` quality . ''\nQuery-dependent relevance can be calculated in a variety of ways ( see [ 3 , 30 ] ) .\nSimilarly , there are a number of works that measure the `` quality '' of the documents , typically as captured through link-based analysis [ 17 , 28 , 26 ] .\nSince our work does not assume a particular form of ranking function , it is complementary to this body of work .\nThere has been a great body of work on top-k result calculation .\nThe main idea is to either stop the traversal of the inverted lists early , or to shrink the lists by pruning postings from the lists [ 14 , 4 , 11 , 8 ] .\nOur proof for the correctness indicator function was primarily inspired by [ 12 ] .\n7 .\nCONCLUDING REMARKS\nWeb search engines typically prune their large-scale inverted indexes in order to scale to enormous query loads .\nWhile this approach may improve performance , by computing the top results from a pruned index we may notice a significant degradation in the result quality .\nIn this paper , we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order .\nWe studied two pruning techniques , namely keyword-based and document-based pruning as well as their combination .\nOur experimental results demonstrated that our algorithms can effectively be used to prune an inverted index without degradation in the quality of results .\nIn particular , a keyword-pruned index can guarantee 73 % of the queries with a size of 30 % of the full index , while a document-pruned index can guarantee 68 % of the queries with the same size .\nWhen we combine the two pruning algorithms we can guarantee 60 % of the queries with an index size of 16 % .\nIt is our hope that our work will help search engines develop better , faster and more efficient indexes and thus provide for a better user search experience on the Web .", "lvl-4": "Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee\nABSTRACT\nThe Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information .\nIn order to cope with the vast amounts of query loads , search engines prune their index to keep documents that are likely to be returned as top results , and use this pruned index to compute the first batches of results .\nWhile this approach can improve performance by reducing the size of the index , if we compute the top results only from the pruned index we may notice a significant degradation in the 
result quality : if a document should be in the top results but was not included in the pruned index , it will be placed behind the results computed from the pruned index .\nGiven the fierce competition in the online search market , this phenomenon is clearly undesirable .\nIn this paper , we study how we can avoid any degradation of result quality due to the pruning-based performance optimization , while still realizing most of its benefit .\nOur contribution is a number of modifications in the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top search results , even though we are computing the first batch from the pruned index most of the time .\nWe also show how to determine the optimal size of a pruned index and we experimentally evaluate our algorithms on a collection of 130 million Web pages .\n1 .\nINTRODUCTION\nAccording to a recent study [ 13 ] , it is estimated that the \u2217 Work done while author was at UCLA Computer Science Department .\n\u2020 This work is partially supported by NSF grants , IIS-0534784 , IIS0347993 , and CNS-0626702 .\nDue to this immense amount of available information , the users are becoming more and more dependent on the Web search engines for locating relevant information on the Web .\nTypically , the Web search engines , similar to other information retrieval applications , utilize a data structure called inverted index .\nAn inverted index provides for the efficient retrieval of the documents ( or Web pages ) that contain a particular keyword .\nIn most cases , a query that the user issues may have thousands or even millions of matching documents .\nIn order to avoid overwhelming the users with a huge amount of results , the search engines present the results in batches of 10 to 20 relevant documents .\nThe user then looks through the first batch of results and , if she does n't find the answer she is looking for , she may potentially request to view the next batch or decide to issue a new query .\nA recent study [ 16 ] indicated that approximately 80 % of the users examine at most the first 3 batches of the results .\nThat is , 80 % of the users typically view at most 30 to 60 results for every query that they issue to a search engine .\nAt the same time , given the size of the Web , the inverted index that the search engines maintain can grow very large .\nOne natural solution to this problem is to create a small index on a subset of the documents that are likely to be returned as the top results ( by using , for example , the pruning techniques in [ 7 , 20 ] ) and compute the first batch of answers using the pruned index .\nWhile this approach has been shown to give significant improvement in performance , it also leads to noticeable degradation in the quality of the search results , because the top answers are computed only from the pruned index [ 7 , 20 ] .\nThat is , even if a page should be placed as the top-matching page according to a search engine 's ranking metric , the page may be placed behind the ones contained in the pruned index if the page did not become part of the pruned index for various reasons [ 7 , 20 ] .\nGiven the fierce competition among search engines today this degradation is clearly undesirable and needs to be addressed if possible .\nIn this paper , we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit .\nThat is , we present 
a number of simple ( yet important ) changes in the pruning techniques for creating the pruned index .\nOur main contribution is a new answer computation algorithm that guarantees that the top-matching pages ( according to the search-engine 's ranking metric ) are always placed at the top of search results , even though we are computing the first batch of answers from the pruned index most of the time .\nThese enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today 's search engines .\nFinally , we study and present how search engines can minimize the operational cost of answering queries while providing high quality search results .\nFigure 1 : ( a ) Search engine replicates its full index IF to in\ncrease query-answering capacity .\n( b ) In the 1st tier , small pindexes IP handle most of the queries .\nWhen IP can not answer a query , it is redirected to the 2nd tier , where the full index IF is used to compute the answer .\n6 .\nRELATED WORK\n[ 3 , 30 ] provide a good overview of inverted indexing in Web search engines and IR systems .\nExperimental studies and analyses of various partitioning schemes for an inverted index are presented in [ 6 , 23 , 33 ] .\nThe pruning algorithms that we have presented in this paper are independent of the partitioning scheme used .\nHowever , [ 1 , 5 , 7 , 27 ] do not consider any query-independent quality ( such as PageRank ) in the ranking function .\n[ 32 ] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of results .\nOur work essentially extends [ 1 , 2 , 4 , 7 , 20 , 27 , 31 ] by proposing mechanisms for providing the correctness guarantee to the computed top-k results .\nSearch engines use various methods of caching as a means of reducing the cost associated with queries [ 18 , 19 , 21 , 31 ] .\nThis thread of work is also orthogonal to ours because a caching scheme may operate on top of our p-index in order to minimize the answer computation cost .\nThe exact ranking functions employed by current search engines are closely guarded secrets .\nIn general , however , the rankings are based on query-dependent relevance and queryindependent document `` quality . 
''\nSimilarly , there are a number of works that measure the `` quality '' of the documents , typically as captured through link-based analysis [ 17 , 28 , 26 ] .\nSince our work does not assume a particular form of ranking function , it is complementary to this body of work .\nThere has been a great body of work on top-k result calculation .\n7 .\nCONCLUDING REMARKS\nWeb search engines typically prune their large-scale inverted indexes in order to scale to enormous query loads .\nWhile this approach may improve performance , by computing the top results from a pruned index we may notice a significant degradation in the result quality .\nIn this paper , we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order .\nWe studied two pruning techniques , namely keyword-based and document-based pruning as well as their combination .\nOur experimental results demonstrated that our algorithms can effectively be used to prune an inverted index without degradation in the quality of results .\nIn particular , a keyword-pruned index can guarantee 73 % of the queries with a size of 30 % of the full index , while a document-pruned index can guarantee 68 % of the queries with the same size .\nWhen we combine the two pruning algorithms we can guarantee 60 % of the queries with an index size of 16 % .\nIt is our hope that our work will help search engines develop better , faster and more efficient indexes and thus provide for a better user search experience on the Web .", "lvl-2": "Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee\nABSTRACT\nThe Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information .\nIn order to cope with the vast amounts of query loads , search engines prune their index to keep documents that are likely to be returned as top results , and use this pruned index to compute the first batches of results .\nWhile this approach can improve performance by reducing the size of the index , if we compute the top results only from the pruned index we may notice a significant degradation in the result quality : if a document should be in the top results but was not included in the pruned index , it will be placed behind the results computed from the pruned index .\nGiven the fierce competition in the online search market , this phenomenon is clearly undesirable .\nIn this paper , we study how we can avoid any degradation of result quality due to the pruning-based performance optimization , while still realizing most of its benefit .\nOur contribution is a number of modifications in the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top search results , even though we are computing the first batch from the pruned index most of the time .\nWe also show how to determine the optimal size of a pruned index and we experimentally evaluate our algorithms on a collection of 130 million Web pages .\n1 .\nINTRODUCTION\nThe amount of information on the Web is growing at a prodigious rate [ 24 ] .\nAccording to a recent study [ 13 ] , it is estimated that the \u2217 Work done while author was at UCLA Computer Science Department .\n\u2020 This work is partially supported by NSF grants , IIS-0534784 , IIS0347993 , and CNS-0626702 .\nAny opinions , findings , and 
conclusions or recommendations expressed in this material are those of the author ( s ) and do not necessarily reflect the views of the funding institutions .\nWeb currently consists of more than 11 billion pages .\nDue to this immense amount of available information , the users are becoming more and more dependent on the Web search engines for locating relevant information on the Web .\nTypically , the Web search engines , similar to other information retrieval applications , utilize a data structure called inverted index .\nAn inverted index provides for the efficient retrieval of the documents ( or Web pages ) that contain a particular keyword .\nIn most cases , a query that the user issues may have thousands or even millions of matching documents .\nIn order to avoid overwhelming the users with a huge amount of results , the search engines present the results in batches of 10 to 20 relevant documents .\nThe user then looks through the first batch of results and , if she does n't find the answer she is looking for , she may potentially request to view the next batch or decide to issue a new query .\nA recent study [ 16 ] indicated that approximately 80 % of the users examine at most the first 3 batches of the results .\nThat is , 80 % of the users typically view at most 30 to 60 results for every query that they issue to a search engine .\nAt the same time , given the size of the Web , the inverted index that the search engines maintain can grow very large .\nSince the users are interested in a small number of results ( and thus are viewing a small portion of the index for every query that they issue ) , using an index that is capable of returning all the results for a query may constitute a significant waste in terms of time , storage space and computational resources , which is bound to get worse as the Web grows larger over time [ 24 ] .\nOne natural solution to this problem is to create a small index on a subset of the documents that are likely to be returned as the top results ( by using , for example , the pruning techniques in [ 7 , 20 ] ) and compute the first batch of answers using the pruned index .\nWhile this approach has been shown to give significant improvement in performance , it also leads to noticeable degradation in the quality of the search results , because the top answers are computed only from the pruned index [ 7 , 20 ] .\nThat is , even if a page should be placed as the top-matching page according to a search engine 's ranking metric , the page may be placed behind the ones contained in the pruned index if the page did not become part of the pruned index for various reasons [ 7 , 20 ] .\nGiven the fierce competition among search engines today this degradation is clearly undesirable and needs to be addressed if possible .\nIn this paper , we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit .\nThat is , we present a number of simple ( yet important ) changes in the pruning techniques for creating the pruned index .\nOur main contribution is a new answer computation algorithm that guarantees that the top-matching pages ( according to the search-engine 's ranking metric ) are always placed at the top of search results , even though we are computing the first batch of answers from the pruned index most of the time .\nThese enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today 's search 
engines .\nFinally , we study and present how search engines can minimize the operational cost of answering queries while providing high quality search results .\nFigure 1 : ( a ) Search engine replicates its full index IF to in\ncrease query-answering capacity .\n( b ) In the 1st tier , small pindexes IP handle most of the queries .\nWhen IP can not answer a query , it is redirected to the 2nd tier , where the full index IF is used to compute the answer .\n2 .\nCLUSTER ARCHITECTURE AND COST SAVINGS FROM A PRUNED INDEX\nTypically , a search engine downloads documents from the Web and maintains a local inverted index that is used to answer queries quickly .\nInverted indexes .\nAssume that we have collected a set of documents D = { D1 , ... , DM } and that we have extracted all the terms T = { t1 , ... , tn } from the documents .\nFor every single term ti \u2208 T we maintain a list I ( ti ) of document IDs that contain ti .\nEvery entry in I ( ti ) is called a posting and can be extended to include additional information , such as how many times ti appears in a document , the positions of ti in the document , whether ti is bold/italic , etc. .\nThe set of all the lists I = { I ( t1 ) , ... , I ( tn ) } is our inverted index .\n2.1 Two-tier index architecture\nSearch engines are accepting an enormous number of queries every day from eager users searching for relevant information .\nFor example , Google is estimated to answer more than 250 million user queries per day .\nIn order to cope with this huge query load , search engines typically replicate their index across a large cluster of machines as the following example illustrates : Example 1 Consider a search engine that maintains a cluster of machines as in Figure 1 ( a ) .\nThe size of its full inverted index IF is larger than what can be stored in a single machine , so each copy of IF is stored across four different machines .\nWe also suppose that one copy of IF can handle the query load of 1000 queries/sec .\nAssuming that the search engine gets 5000 queries/sec , it needs to replicate IF five times to handle the load .\nOverall , the search engine needs to maintain 4 \u00d7 5 = 20 machines in its cluster .\n\u2751 While fully replicating the entire index IF multiple times is a straightforward way to scale to a large number of queries , typical query loads at search engines exhibit certain localities , allowing for significant reduction in cost by replicating only a small portion of the full index .\nIn principle , this is typically done by pruning a full index IF to create a smaller , pruned index ( or p-index ) IP , which contains a subset of the documents that are likely to be returned as top results .\nGiven the p-index , search engines operate by employing a twotier index architecture as we show in Figure 1 ( b ) : All incoming queries are first directed to one of the p-indexes kept in the 1st tier .\nIn the cases where a p-index can not compute the answer ( e.g. 
was unable to find enough documents to return to the user ) the query is answered by redirecting it to the 2nd tier , where we maintain a full index IF .\nThe following example illustrates the potential reduction in the query-processing cost by employing this two-tier index architecture .\nExample 2 Assume the same parameter settings as in Example 1 .\nThat is , the search engine gets a query load of 5000 queries/sec\n( 2 ) If ( C = 1 ) Then ( 3 ) Return A ( 4 ) Else ( 5 ) A = ComputeAnswer ( q , IF ) ( 6 ) Return A\nFigure 2 : Computing the answer under the two-tier architecture with the result correctness guarantee .\nand every copy of an index ( both the full IF and p-index IP ) can handle up to 1000 queries/sec .\nAlso assume that the size of IP is one fourth of IF and thus can be stored on a single machine .\nFinally , suppose that the p-indexes can handle 80 % of the user queries by themselves and only forward the remaining 20 % queries to IF .\nUnder this setting , since all 5000/sec user queries are first directed to a p-index , five copies of IP are needed in the 1st tier .\nFor the 2nd tier , since 20 % ( or 1000 queries/sec ) are forwarded , we need to maintain one copy of IF to handle the load .\nOverall we need a total of 9 machines ( five machines for the five copies of IP and four machines for one copy of IF ) .\nCompared to Example 1 , this is more than 50 % reduction in the number of machines .\n\u2751 The above example demonstrates the potential cost saving achieved by using a p-index .\nHowever , the two-tier architecture may have a significant drawback in terms of its result quality compared to the full replication of IF ; given the fact that the p-index contains only a subset of the data of the full index , it is possible that , for some queries , the p-index may not contain the top-ranked document according to the particular ranking criteria used by the search engine and fail to return it as the top page , leading to noticeable quality degradation in search results .\nGiven the fierce competition in the online search market , search engine operators desperately try to avoid any reduction in search quality in order to maximize user satisfaction .\n2.2 Correctness guarantee under two-tier architecture\nHow can we avoid the potential degradation of search quality under the two-tier architecture ?\nOur basic idea is straightforward : We use the top-k result from the p-index only if we know for sure that the result is the same as the top-k result from the full index .\nThe algorithm in Figure 2 formalizes this idea .\nIn the algorithm , when we compute the result from IP ( Step 1 ) , we compute not only the top-k result A , but also the correctness indicator function C defined as follows : Definition 1 ( Correctness indicator function ) Given a query q , the p-index IP returns the answer A together with a correctness indicator function C. C is set to 1 if A is guaranteed to be identical ( i.e. 
same results in the same order ) to the result computed from the full index IF .\nIf it is possible that A is different , C is set to 0 .\n\u2751 Note that the algorithm returns the result from IP ( Step 3 ) only when it is identical to the result from IF ( condition C = 1 in Step 2 ) .\nOtherwise , the algorithm recomputes and returns the result from the full index IF ( Step 5 ) .\nTherefore , the algorithm is guaranteed to return the same result as the full replication of IF all the time .\nNow , the real challenge is to find out ( 1 ) how we can compute the correctness indicator function C and ( 2 ) how we should prune the index to make sure that the majority of queries are handled by IP alone .\nA straightforward way to calculate C is to compute the top-k answer both from IP and IF and compare them .\nThis naive solution , however , incurs a cost even higher than the full replication of IF because the answers are computed twice : once from IP and once from IF .\nIs there any way to compute the correctness indicator function C only from IP without computing the answer from IF ?\nQuestion 2 How should we prune IF to IP to realize the maximum cost saving ?\nThe effectiveness of Algorithm 2.1 critically depends on how often the correctness indicator function C is evaluated to be 1 .\nIf C = 0 for all queries , for example , the answers to all queries will be computed twice , once from IP ( Step 1 ) and once from IF ( Step 5 ) , so the performance will be worse than the full replication of IF .\nWhat will be the optimal way to prune IF to IP , such that C = 1 for a large fraction of queries ?\nIn the next few sections , we try to address these questions .\n3 .\nOPTIMAL SIZE OF THE P-INDEX\nIntuitively , there exists a clear tradeoff between the size of IP and the fraction of queries that IP can handle : When IP is large and has more information , it will be able to handle more queries , but the cost for maintaining and looking up IP will be higher .\nWhen IP is small , on the other hand , the cost for IP will be smaller , but more queries will be forwarded to IF , requiring us to maintain more copies of IF .\nGiven this tradeoff , how should we determine the optimal size of IP in order to maximize the cost saving ?\nTo find the answer , we start with a simple example .\nExample 3 Again , consider a scenario similar to Example 1 , where the query load is 5000 queries/sec , each copy of an index can handle 1000 queries/sec , and the full index spans across 4 machines .\nBut now , suppose that if we prune IF by 75 % to IP , ( i.e. , the size of IP , is 25 % of IF ) , IP , can handle 40 % of the queries ( i.e. 
, C = 1 for 40 % of the queries ) .\nAlso suppose that if IF is pruned by 50 % to IP 2 , IP 2 can handle 80 % of the queries .\nWhich one of the IP , , IP 2 is preferable for the 1st-tier index ?\nneeded when we use IP , for the 1st tier .\nAt the 1st tier , we need 5 To find out the answer , we first compute the number of machines copies of IP , to handle the query load of 5000 queries/sec .\nSince the size of IP , is 25 % of IF ( that requires 4 machines ) , one copy of IP , requires one machine .\nTherefore , the total number of machines required for the 1st tier is 5 \u00d7 1 = 5 ( 5 copies of IP , with 1 machine per copy ) .\nAlso , since IP , can handle 40 % of the queries , the 2nd tier has to handle 3000 queries/sec ( 60 % of the 5000 queries/sec ) , so we need a total of 3 \u00d7 4 = 12 machines for the 2nd tier ( 3 copies of IF with 4 machines per copy ) .\nOverall , when we use IP , for the 1st tier , we need 5 + 12 = 17 machines to handle the load .\nWe can do similar analysis when we use IP 2 and see that a total of 14 machines are needed when IP 2 is used .\nGiven this result , we can conclude that using IP 2 is preferable .\n\u2751 The above example shows that the cost of the two-tier architecture depends on two important parameters : the size of the p-index and the fraction of the queries that can be handled by the 1st tier index alone .\nWe use s to denote the size of the p-index relative to IF ( i.e. , if s = 0.2 , for example , the p-index is 20 % of the size of IF ) .\nWe use f ( s ) to denote the fraction of the queries that a p-index of size s can handle ( i.e. , if f ( s ) = 0.3 , 30 % of the queries return the value C = 1 from IP ) .\nIn general , we can expect that f ( s ) will increase as s gets larger because IP can handle more queries as its size grows .\nIn Figure 3 , we show an example graph of f ( s ) over s. Given the notation , we can state the problem of p-index-size optimization as follows .\nIn formulating the problem , we assume that the number of machines required to operate a two-tier architecture\nFigure 3 : Example function showing the fraction of guaranteed\nqueries f ( s ) at a given size s of the p-index .\nis roughly proportional to the total size of the indexes necessary to handle the query load .\nProblem 1 ( Optimal index size ) Given a query load Q and the function f ( s ) , find the optimal p-index size s that minimizes the total size of the indexes necessary to handle the load Q. 
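The cost arithmetic behind Examples 1-3 and Problem 1 is easy to mechanize. The sketch below is not taken from the paper: the constants mirror Example 1, but the function name machines_needed, the assumption that a p-index copy occupies a machine count proportional to s, and the made-up f_curve are illustrative choices of ours. With the fractions quoted in Example 3 it reproduces the 17-machine and 14-machine totals.

```python
import math

QUERY_LOAD = 5000          # queries/sec arriving at the engine (Example 1)
COPY_CAPACITY = 1000       # queries/sec a single index copy can serve
FULL_INDEX_MACHINES = 4    # machines holding one copy of the full index IF

def machines_needed(s, f):
    """Machines for a 1st tier of p-indexes of relative size s that guarantees a
    fraction f of the queries, plus a 2nd tier of full-index copies for the rest."""
    p_copies = math.ceil(QUERY_LOAD / COPY_CAPACITY)             # every query hits IP first
    p_machines = p_copies * math.ceil(s * FULL_INDEX_MACHINES)   # footprint proportional to s
    redirected = (1 - f) * QUERY_LOAD                            # queries with C = 0 go to IF
    full_machines = math.ceil(redirected / COPY_CAPACITY) * FULL_INDEX_MACHINES
    return p_machines + full_machines

print(machines_needed(0.25, 0.40))   # IP1 of Example 3 -> 17 machines
print(machines_needed(0.50, 0.80))   # IP2 of Example 3 -> 14 machines

# Problem 1 then amounts to minimizing this cost over s for a measured f(s) curve:
f_curve = lambda s: min(1.0, 1.6 * s ** 0.5)   # made-up concave stand-in for f(s)
best_s = min((k / 100 for k in range(5, 101)),
             key=lambda s: machines_needed(s, f_curve(s)))
```

The theorem that follows gives the analytical counterpart of this brute-force search, characterizing the optimum through the slope of f ( s ).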
\u2751 The following theorem shows how we can determine the optimal index size .\nTheorem 1 The cost for handling the query load Q is minimal when the size of the p-index , s , satisfies df ( s ) / ds = 1 .\n\u2751 Proof The proof of this and the following theorems is omitted due to space constraints .\nThis theorem shows that the optimal point is when the slope of the f ( s ) curve is 1 .\nFor example , in Figure 3 , the optimal size is when s = 0.16 .\nNote that the exact shape of the f ( s ) graph may vary depending on the query load and the pruning policy .\nFor example , even for the same p-index , if the query load changes significantly , fewer ( or more ) queries may be handled by the p-index , decreasing ( or increasing ) f ( s ) .\nSimilarly , if we use an effective pruning policy , more queries will be handled by IP than when we use an ineffective pruning policy , increasing f ( s ) .\nTherefore , the function f ( s ) and the optimal-index size may change significantly depending on the query load and the pruning policy .\nIn our later experiments , however , we find that even though the shape of the f ( s ) graph changes noticeably between experiments , the optimal index size consistently lies between 10 % -- 30 % in most experiments .\n4 .\nPRUNING POLICIES\nIn this section , we show how we should prune the full index IF to IP , so that ( 1 ) we can compute the correctness indicator function C from IP itself and ( 2 ) we can handle a large fraction of queries by IP .\nIn designing the pruning policies , we note the following two localities in the users ' search behavior :\n1 .\nKeyword locality : Although there are many different words in the document collection that the search engine indexes , a few popular keywords constitute the majority of the query loads .\nThis keyword locality implies that the search engine will be able to answer a significant fraction of user queries even if it can handle only these few popular keywords .\n2 .\nDocument locality : Even if a query has millions of matching documents , users typically look at only the first few results [ 16 ] .\nThus , as long as search engines can compute the first few top-k answers correctly , users often will not notice that the search engine actually has not computed the correct answer for the remaining results ( unless the users explicitly request them ) .\nBased on the above two localities , we now investigate two different types of pruning policies : ( 1 ) a keyword pruning policy , which takes advantage of the keyword locality by pruning the whole inverted list I ( ti ) for unpopular keywords ti 's and ( 2 ) a document pruning policy , which takes advantage of the document locality by keeping only a few postings in each list I ( ti ) , which are likely to be included in the top-k results .\nAs we discussed before , we need to be able to compute the correctness indicator function from the pruned index alone in order to provide the correctness guarantee .\nSince the computation of correctness indicator function may critically depend on the particular ranking function used by a search engine , we first clarify our assumptions on the ranking function .\n4.1 Assumptions on ranking function\nConsider a query q = { t1 , t2 , ... 
, tw } that contains a subset of the index terms .\nThe goal of the search engine is to return the documents that are most relevant to query q .\nThis is done in two steps : first we use the inverted index to find all the documents that contain the terms in the query .\nSecond , once we have the relevant documents , we calculate the rank ( or score ) of each one of the documents with respect to the query and we return to the user the documents that rank the highest .\nMost of the major search engines today return documents containing all query terms ( i.e. they use AND-semantics ) .\nIn order to make our discussions more concise , we will also assume the popular AND-semantics while answering a query .\nIt is straightforward to extend our results to OR-semantics as well .\nThe exact ranking function that search engines employ is a closely guarded secret .\nWhat is known , however , is that the factors in determining the document ranking can be roughly categorized into two classes : Query-dependent relevance .\nThis particular factor of relevance captures how relevant the query is to every document .\nAt a high level , given a document D , for every term ti a search engine assigns a term relevance score tr ( D , ti ) to D. Given the tr ( D , ti ) scores for every ti , then the query-dependent relevance of D to the query , noted as tr ( D , q ) , can be computed by combining the individual term relevance values .\nOne popular way for calculating the query -- dependent relevance is to represent both the document D and the query q using the TF.IDF vector space model [ 29 ] and employ a cosine distance metric .\nSince the exact form of tr ( D , ti ) and tr ( D , q ) differs depending on the search engine , we will not restrict to any particular form ; instead , in order to make our work applicable in the general case , we will make the generic assumption that the query-dependent relevance is computed as a function of the individual term relevance values in the query :\nQuery-independent document quality .\nThis is a factor that measures the overall `` quality '' of a document D independent of the particular query issued by the user .\nPopular techniques that compute the general quality of a page include PageRank [ 26 ] , HITS [ 17 ] and the likelihood that the page is a `` spam '' page [ 25 , 15 ] .\nHere , we will use pr ( D ) to denote this query-independent part of the final ranking function for document D .\nThe final ranking score r ( D , q ) of a document will depend on both the query-dependent and query-independent parts of the ranking function .\nThe exact combination of these parts may be done in a variety of ways .\nIn general , we can assume that the final ranking score of a document is a function of its query-dependent and query-independent relevance scores .\nMore formally :\nFor example , fr ( tr ( D , q ) , pr ( D ) ) may take the form fr ( tr ( D , q ) , pr ( D ) ) = \u03b1 \u2022 tr ( D , q ) + ( 1 -- \u03b1 ) \u2022 pr ( D ) , thus giving weight \u03b1 to the query-dependent part and the weight 1 -- \u03b1 to the query-independent part .\nIn Equations 1 and 2 the exact form of fr and ftr can vary depending on the search engine .\nTherefore , to make our discussion applicable independent of the particular ranking function used by search engines , in this paper , we will make only the generic assumption that the ranking function r ( D , q ) is monotonic on its parameters tr ( D , t1 ) , .\n.\n.\n, tr ( D , tw ) and pr ( D ) .\nFigure 4 : Keyword and document pruning .\nFigure 5 : 
Result guarantee in keyword pruning .\nDefinition 2 A function f ( \u03b1 , \u03b2 , ... , \u03c9 ) is monotonic if \u2200 \u03b11 \u2265 \u03b12 , \u2200 \u03b21 \u2265 \u03b22 , ... , \u2200 \u03c91 \u2265 \u03c92 it holds that f ( \u03b11 , \u03b21 , ... , \u03c91 ) \u2265 f ( \u03b12 , \u03b22 , ... , \u03c92 ) .\nRoughly , the monotonicity of the ranking function implies that , between two documents D1 and D2 , if D1 has higher query-dependent relevance than D2 and also a higher query-independent score than D2 , then D1 should be ranked higher than D2 , which we believe is a reasonable assumption in most practical settings .\n4.2 Keyword pruning\nGiven our assumptions on the ranking function , we now investigate the `` keyword pruning '' policy , which prunes the inverted index IF `` horizontally '' by removing the whole I ( ti ) 's corresponding to the least frequent terms .\nIn Figure 4 we show a graphical representation of keyword pruning , where we remove the inverted lists for t3 and t5 , assuming that they do not appear often in the query load .\nNote that after keyword pruning , if all keywords { t1 , ... , tn } in the query q appear in IP , the p-index has the same information as IF as long as q is concerned .\nIn other words , if all keywords in q appear in IP , the answer computed from IP is guaranteed to be the same as the answer computed from IF .\nFigure 5 formalizes this observation and computes the correctness indicator function C for a keyword-pruned index IP .\nIt is straightforward to prove that the answer from IP is identical to that from IF if C = 1 in the above algorithm .\nWe now consider the issue of optimizing the IP such that it can handle the largest fraction of queries .\nThis problem can be formally stated as follows : Problem 2 ( Optimal keyword pruning ) Given the query load Q and a goal index size s \u2022 | IF | for the pruned index , select the inverted lists IP = { I ( t1 ) , ... , I ( th ) } such that | IP | < s \u2022 | IF | and the fraction of queries that IP can answer ( expressed by f ( s ) ) is maximized .\nUnfortunately , the optimal solution to the above problem is intractable as we can show by reducing from knapsack ( we omit the complete proof ) .\nTheorem 2 The problem of calculating the optimal keyword pruning is NP-hard .\n\u2737 Given the intractability of the optimal solution , we need to resort to an approximate solution .\nA common approach for similar knapsack problems is to adopt a greedy policy by keeping the items with the maximum benefit per unit cost [ 9 ] .\nIn our context , the potential benefit of an inverted list I ( ti ) is the number of queries that can be answered by IP when I ( ti ) is included in IP .\nWe approximate this number by the fraction of queries in the query load Q that include the term ti and represent it as P ( ti ) .\nFor example , if 100 out of 1000 queries contain the term computer , then P ( computer ) = 0.1 .\nFigure 6 : Approximation algorithm for the optimal keyword pruning .\nFigure 7 : Global document pruning based on pr .\nThe cost of including I ( ti ) in the p-index is its size | I ( ti ) | .\nThus , in our greedy approach in Figure 6 , we include I ( ti ) 's in the decreasing order of P ( ti ) / | I ( ti ) | as long as | IP | < s \u00b7 | IF | .\nLater in our experiment section , we evaluate what fraction of queries can be handled by IP when we employ this greedy keyword-pruning policy .
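To make the greedy policy of Figure 6 and the correctness test of Figure 5 concrete, here is a minimal sketch; it assumes an in-memory index represented as a dict from term to posting list, and all identifiers (build_keyword_pruned_index, keyword_pruning_guarantee, query_log) are ours rather than the paper's.

```python
from collections import Counter

def build_keyword_pruned_index(full_index, query_log, s):
    """Greedy keyword pruning (cf. Figure 6): admit whole inverted lists in decreasing
    order of P(ti) / |I(ti)| until the size budget s * |IF| is exhausted.
    full_index: dict term -> posting list; query_log: iterable of term tuples."""
    total_queries = len(query_log)
    hits = Counter(t for q in query_log for t in set(q))
    p = {t: hits[t] / total_queries for t in full_index}           # P(ti)
    budget = s * sum(len(lst) for lst in full_index.values())      # s * |IF| in postings
    pruned, used = {}, 0
    for t in sorted(full_index, key=lambda u: p[u] / max(len(full_index[u]), 1), reverse=True):
        if used + len(full_index[t]) <= budget:
            pruned[t] = full_index[t]
            used += len(full_index[t])
    return pruned

def keyword_pruning_guarantee(query_terms, pruned_index):
    """Figure 5: C = 1 exactly when every query keyword's full list survives in IP,
    since IP then carries the same information as IF for this query."""
    return 1 if all(t in pruned_index for t in query_terms) else 0
```

With the example from the text, a term occurring in 100 of 1,000 logged queries gets P ( ti ) = 0.1, and its list competes for the budget with weight 0.1 / | I ( ti ) |.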
4.3 Document pruning\nAt a high level , document pruning tries to take advantage of the observation that most users are mainly interested in viewing the top few answers to a query .\nGiven this , it is unnecessary to keep all postings in an inverted list I ( ti ) , because users will not look at most of the documents in the list anyway .\nWe depict the conceptual diagram of the document pruning policy in Figure 4 .\nIn the figure , we `` vertically prune '' postings corresponding to D4 , D5 and D6 of t1 and D8 of t3 , assuming that these documents are unlikely to be part of top-k answers to user queries .\nAgain , our goal is to develop a pruning policy such that ( 1 ) we can compute the correctness indicator function C from IP alone and ( 2 ) we can handle the largest fraction of queries with IP .\nIn the next few sections , we discuss a few alternative approaches for document pruning .\n4.3.1 Global PR-based pruning\nWe first investigate the pruning policy that is commonly used by existing search engines .\nThe basic idea for this pruning policy is that the query-independent quality score pr ( D ) is a very important factor in computing the final ranking of the document ( e.g. PageRank is known to be one of the most important factors determining the overall ranking in the search results ) , so we build the p-index by keeping only those documents whose pr values are high ( i.e. , pr ( D ) > \u03c4p for a threshold value \u03c4p ) .\nThe hope is that most of the top-ranked results are likely to have high pr ( D ) values , so the answer computed from this p-index is likely to be similar to the answer computed from the full index .\nFigure 7 describes this pruning policy more formally , where we sort all documents Di 's by their respective pr ( Di ) values and keep a Di in the p-index when its pr ( Di ) value is higher than the global threshold value \u03c4p .\n( 1 ) Foreach I ( ti ) \u2208 IF ( 2 ) Sort Di 's in I ( ti ) based on pr ( Di ) ( 3 ) If | I ( ti ) | \u2264 N Then keep all Di 's ( 4 ) Else keep the top-N Di 's with the highest pr ( Di )\nFigure 8 : Local document pruning based on pr .\nFigure 9 : Extended keyword-specific document pruning based on pr and tr .\nWe refer to this pruning policy as global PR-based pruning ( GPR ) .\nVariations of this pruning policy are possible .\nFor example , we may adjust the threshold value \u03c4p locally for each inverted list I ( ti ) , so that we maintain at least a certain number of postings for each inverted list I ( ti ) .\nThis policy is shown in Figure 8 .\nWe refer to this pruning policy as local PR-based pruning ( LPR ) .\nUnfortunately , the biggest shortcoming of this policy is that we can prove that we can not compute the correctness function C from IP alone when IP is constructed this way .\nProof Assume we create IP based on the GPR policy ( generalizing the proof to LPR is straightforward ) and that every document D with pr ( D ) > \u03c4p is included in IP .\nAssume that the kth entry in the top-k results , has a ranking score of r ( Dk , q ) = fr ( tr ( Dk , q ) , pr ( Dk ) ) .\nNow consider another document Dj that was pruned from IP because pr ( Dj ) < \u03c4p .\nEven so , it is still possible that the document 's tr ( Dj , q ) value is very high such that r ( Dj , q ) = fr ( tr ( Dj , q ) , pr ( Dj ) ) > r ( Dk , q ) .\n\u25a0 Therefore , under a PR-based pruning policy , the quality of the answer computed from IP can be significantly worse than that from IF and it is not possible to detect this degradation without computing the answer from IF .\nIn the next section , we propose simple yet essential changes to this pruning policy that allows us to compute the correctness function C from IP alone .
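The two PR-based baselines just described reduce to simple filters over the posting lists. The following sketch is ours and only illustrative: postings are assumed to be (doc_id, tr) pairs and the query-independent scores are assumed to be available in a dict pr.

```python
def global_pr_pruning(full_index, pr, tau_p):
    """GPR (Figure 7): keep a posting only if its document's pr value clears one
    global threshold tau_p, regardless of which list it sits in."""
    return {t: [(d, tr) for (d, tr) in lst if pr[d] > tau_p]
            for t, lst in full_index.items()}

def local_pr_pruning(full_index, pr, n):
    """LPR (Figure 8): keep a short list in full, otherwise only its top-N documents
    by pr, so that every inverted list retains at least some postings."""
    pruned = {}
    for t, lst in full_index.items():
        if len(lst) <= n:
            pruned[t] = list(lst)
        else:
            pruned[t] = sorted(lst, key=lambda posting: pr[posting[0]], reverse=True)[:n]
    return pruned

# Neither filter records anything about the tr scores it discards, which is exactly
# why, as argued above, the correctness indicator C cannot be computed from IP alone.
```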
4.3.2 Extended keyword-specific pruning\nThe main problem of global PR-based document pruning policies is that we do not know the term-relevance score tr ( D , ti ) of the pruned documents , so a document not in IP may have a higher ranking score than the ones returned from IP because of their high tr scores .\nHere , we propose a new pruning policy , called extended keyword-specific document pruning ( EKS ) , which avoids this problem by pruning not just based on the query-independent pr ( D ) score but also based on the term-relevance tr ( D , ti ) score .\nThat is , for every inverted list I ( ti ) , we pick two threshold values , \u03c4pi for pr and \u03c4ti for tr , such that if a document D \u2208 I ( ti ) satisfies pr ( D ) > \u03c4pi or tr ( D , ti ) > \u03c4ti , we include it in I ( ti ) of IP .\nOtherwise , we prune it from IP .\nFigure 9 formally describes this algorithm .\nThe threshold values , \u03c4pi and \u03c4ti , may be selected in a number of different ways .\nFor example , if pr and tr have equal weight in the final ranking and if we want to keep at most N postings in each inverted list I ( ti ) , we may want to set the two threshold values equal to \u03c4i ( \u03c4pi = \u03c4ti = \u03c4i ) and adjust \u03c4i such that N postings remain in I ( ti ) .\nThis new pruning policy , when combined with a monotonic scoring function , enables us to compute the correctness indicator function C from the pruned index .\nWe use the following example to explain how we may compute C. Example 4 Consider the query q = { t1 , t2 } and a monotonic ranking function , f ( pr ( D ) , tr ( D , t1 ) , tr ( D , t2 ) ) .\nThere are three possible scenarios on how a document D appears in the pruned index IP .\n1 .\nD appears in both I ( t1 ) and I ( t2 ) of IP : Since complete information of D appears in IP , we can compute the exact score of D based on pr ( D ) , tr ( D , t1 ) and tr ( D , t2 ) values in IP : f ( pr ( D ) , tr ( D , t1 ) , tr ( D , t2 ) ) .\nFigure 10 : Ranking based on thresholds tr\u03c4 ( ti ) and pr\u03c4 ( ti ) .\n2 .\nD appears only in I ( t1 ) but not in I ( t2 ) : Since D does not appear in I ( t2 ) , we do not know tr ( D , t2 ) , so we can not compute its exact ranking score .\nHowever , from our pruning criteria , we know that tr ( D , t2 ) can not be larger than the threshold value \u03c4t2 .\nTherefore , from the monotonicity of f ( Definition 2 ) , we know that the ranking score of D , f ( pr ( D ) , tr ( D , t1 ) , tr ( D , t2 ) ) , can not be larger than f ( pr ( D ) , tr ( D , t1 ) , \u03c4t2 ) .\n3 .\nD does not appear in any list : Since D does not appear at all in IP , we do not know any of the pr ( D ) , tr ( D , t1 ) , tr ( D , t2 ) values .\nHowever , from our pruning criteria , we know that pr ( D ) < \u03c4p1 and < \u03c4p2 and that tr ( D , t1 ) < \u03c4t1 and tr ( D , t2 ) < \u03c4t2 .\nTherefore , from the monotonicity of f , we know that the ranking score of D can not be larger than f ( min ( \u03c4p1 , \u03c4p2 ) , \u03c4t1 , \u03c4t2 ) .\nThe above example shows that when a document does not appear in one of the inverted lists I ( ti ) with ti \u2208 q , we can not compute its exact ranking score , but we can still compute its upper bound score by using the threshold value \u03c4ti for the missing values .\nThis suggests the algorithm in Figure 10 that computes the top-k result A from IP together with the correctness indicator function C .\nIn the algorithm , the correctness indicator function C is set to one only if all documents in the top-k result A appear in all inverted lists I ( ti ) with ti \u2208 q , so we know their exact score .\nIn this case , because these documents have scores higher than the upper bound scores of any other documents , we know that no other documents can appear in the top-k .
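As a concrete illustration of EKS (Figure 9) and of the upper-bound answer computation it enables (Figure 10), here is a hedged sketch; the weighted sum stands in for an arbitrary monotonic ranking function, the per-term thresholds are passed in as dicts, and every identifier is our own naming, not the paper's.

```python
def eks_prune(full_index, pr, tau_p, tau_t):
    """Figure 9: keep a posting (d, tr) of list I(ti) when pr[d] > tau_p[ti] or tr > tau_t[ti]."""
    return {t: [(d, tr) for (d, tr) in lst if pr[d] > tau_p[t] or tr > tau_t[t]]
            for t, lst in full_index.items()}

def top_k_with_guarantee(query, pruned_index, pr, tau_p, tau_t, k, alpha=0.5):
    """Figure 10 in spirit: compute (top-k answer A, correctness indicator C) from IP alone."""
    score = lambda p, t: alpha * t + (1 - alpha) * p       # any monotonic combination would do
    seen = {}                                              # doc -> {term: tr} gathered from IP
    for t in query:
        for d, tr in pruned_index.get(t, []):
            seen.setdefault(d, {})[t] = tr
    exact, upper_bounds = {}, []
    for d, trs in seen.items():
        if len(trs) == len(query):                         # document present in every list
            exact[d] = sum(score(pr[d], trs[t]) for t in query)
        else:                                              # missing tr values capped by tau_t
            upper_bounds.append(sum(score(pr[d], trs.get(t, tau_t[t])) for t in query))
    # Documents absent from every list are bounded using the thresholds alone.
    upper_bounds.append(sum(score(min(tau_p[t] for t in query), tau_t[t]) for t in query))
    top = sorted(exact.items(), key=lambda kv: kv[1], reverse=True)[:k]
    c = 1 if len(top) == k and top[-1][1] > max(upper_bounds) else 0
    return [d for d, _ in top], c
```

Under the two-tier dispatch of Section 2.2, the returned pair is used directly: when C = 1 the answer is served from IP, otherwise the query is recomputed on the full index IF.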
The following theorem formally proves the correctness of the algorithm .\nIn [ 11 ] Fagin et al. , provides a similar proof in the context of multimedia middleware .\nTheorem 4 Given an inverted index IP pruned by the algorithm in Figure 9 , a query q = { t1 , ... , tw } and a monotonic ranking function , the top-k result from IP computed by Algorithm 4.6 is the same as the top-k result from IF if C = 1 .\n\u2751 Proof Let us assume Dk is the kth ranked document computed from IP according to Algorithm 4.6 .\nFor every document Di \u2208 IF that is not in the top-k result from IP , there are two possible scenarios : First , Di is not in the final answer because it was pruned from all inverted lists I ( tj ) , 1 \u2264 j \u2264 w , in IP .\nIn this case , we know that pr ( Di ) < min1\u2264j\u2264w \u03c4pj < pr ( Dk ) and that tr ( Di , tj ) < \u03c4tj < tr ( Dk , tj ) , 1 \u2264 j \u2264 w. From the monotonicity assumption , it follows that the ranking score of Di is r ( Di ) < r ( Dk ) .\nThat is , Di 's score can never be larger than that of Dk .\nSecond , Di is not in the answer because Di is pruned from some inverted lists , say , I ( t1 ) , ... , I ( tm ) , in IP .\nLet us assume \u00af r ( Di ) = f ( pr ( Di ) , \u03c4t1 , ... , \u03c4tm , tr ( Di , tm+1 ) , ... , tr ( Di , tw ) ) .\nThen , from tr ( Di , tj ) < \u03c4tj ( 1 \u2264 j \u2264 m ) and the monotonicity assumption , we know that r ( Di ) < \u00af r ( Di ) .\nFigure 11 : Fraction of guaranteed queries f ( s ) answered in a keyword-pruned p-index of size s.\nAlso , Algorithm 4.6 sets C = 1 only when the top-k documents have scores larger than \u00af r ( Di ) .\nTherefore , r ( Di ) can not be larger than r ( Dk ) .\n\u25a0\n5 .\nEXPERIMENTAL EVALUATION\nIn order to perform realistic tests for our pruning policies , we implemented a search engine prototype .\nFor the experiments in this paper , our search engine indexed about 130 million pages , crawled from the Web during March of 2004 .\nThe crawl started from the Open Directory 's [ 10 ] homepage and proceeded in a breadth-first manner .\nOverall , the total uncompressed size of our crawled Web pages is approximately 1.9 TB , yielding a full inverted index IF of approximately 1.2 TB .\nFor the experiments reported in this section we used a real set of queries issued to Looksmart [ 22 ] on a daily basis during April of 2003 .\nAfter keeping only the queries containing keywords that were present in our inverted index , we were left with a set of about 462 million queries .\nWithin our query set , the average number of terms per query is 2 and 98 % of the queries contain at most 5 terms .\nSome experiments require us to use a particular ranking function .\nFor these , we use the ranking function similar to the one used in [ 20 ] .\nMore precisely , our ranking function r ( D , q ) is\nwhere prnorm ( D ) is the normalized PageRank of D computed from the downloaded pages and trnorm ( D , q ) is the normalized TF.IDF cosine distance of D to q .\nThis function is clearly simpler than the real functions employed by commercial search engines , but we believe for our evaluation this simple function is adequate , because we are not studying the effectiveness of a ranking function , but the effectiveness of pruning policies .\n5.1 Keyword pruning\nIn our first experiment we study the performance of the keyword pruning , described in Section 4.2 .\nMore 
specifically , we apply the algorithm HS of Figure 6 to our full index IF and create a keyword-pruned p-index IP of size s. For the construction of our keyword-pruned p-index we used the query frequencies observed during the first 10 days of our data set .\nThen , using the remaining 20-day query load , we measured f ( s ) , the fraction of queries handled by IP .\nAccording to the algorithm of Figure 5 , a query can be handled by IP ( i.e. , C = 1 ) if IP includes the inverted lists for all of the query 's keywords .\nWe have repeated the experiment for varying values of s , picking the keywords greedily as discussed in Section 4.2 .\nThe result is shown in Figure 11 .\nThe horizontal axis denotes the size s of the p-index as a fraction of the size of IF .\nThe vertical axis shows the fraction f ( s ) of the queries that the p-index of size s can answer .\nThe results of Figure 11 , are very encouraging : we can answer a significant fraction of the queries with a small fraction of the original index .\nFor example , approximately 73 % of the queries can be answered using 30 % of the original index .\nAlso , we find that when we use the keyword pruning policy only , the optimal index size is\nFigure 12 : Fraction of guaranteed queries f ( s ) answered in a document-pruned p-index of size s. Figure 13 : Fraction of queries answered in a document-pruned p-index of size s.\n5.2 Document pruning\nWe continue our experimental evaluation by studying the performance of the various document pruning policies described in Section 4.3 .\nFor the experiments on document pruning reported here we worked with a 5.5 % sample of the whole query set .\nThe reason behind this is merely practical : since we have much less machines compared to a commercial search engine it would take us about a year of computation to process all 462 million queries .\nFor our first experiment , we generate a document-pruned p-index of size s by using the Extended Keyword-Specific pruning ( EKS ) in Section 4 .\nWithin the p-index we measure the fraction of queries that can be guaranteed ( according to Theorem 4 ) to be correct .\nWe have performed the experiment for varying index sizes s and the result is shown in Figure 12 .\nBased on this figure , we can see that our document pruning algorithm performs well across the scale of index sizes s : for all index sizes larger than 40 % , we can guarantee the correct answer for about 70 % of the queries .\nThis implies that our EKS algorithm can successfully identify the necessary postings for calculating the top-20 results for 70 % of the queries by using at least 40 % of the full index size .\nFrom the figure , we can see that the optimal index size s = 0.20 when we use EKS as our pruning policy .\nWe can compare the two pruning schemes , namely the keyword pruning and EKS , by contrasting Figures 11 and 12 .\nOur observation is that , if we would have to pick one of the two pruning policies , then the two policies seem to be more or less equivalent for the p-index sizes s \u2264 20 % .\nFor the p-index sizes s > 20 % , keyword pruning does a much better job as it provides a higher number of guarantees at any given index size .\nLater in Section 5.3 , we discuss the combination of the two policies .\nIn our next experiment , we are interested in comparing EKS with the PR-based pruning policies described in Section 4.3 .\nTo this end , apart from EKS , we also generated document-pruned pindexes for the Global pr-based pruning ( GPR ) and the Local prbased pruning ( LPR ) policies 
.\nFor each of the polices we created document-pruned p-indexes of varying sizes s .\nSince GPR and LPR can not provide a correctness guarantee , we will compare the fraction of queries from each policy that are identical ( i.e. the same results in the same order ) to the top-k results calculated from the full index .\nHere , we will report our results for k = 20 ; the results are similar for other values of k .\nThe results are shown in Figure 13 .\nFigure 14 : Average fraction of the top-20 results of p-index with size s contained in top-20 results of the full index .\nFraction of queries guaranteed for top-20 per fraction of index , using keyword and document Figure 15 : Combining keyword and document pruning .\nThe horizontal axis shows the size s of the p-index ; the vertical axis shows the fraction f ( s ) of the queries whose top-20 results are identical to the top-20 results of the full index , for a given size s. By observing Figure 13 , we can see that GPR performs the worst of the three policies .\nOn the other hand EKS , picks up early , by answering a great fraction of queries ( about 62 % ) correctly with only 10 % of the index size .\nThe fraction of queries that LPR can answer remains below that of EKS until about s = 37 % .\nFor any index size larger than 37 % , LPR performs the best .\nIn the experiment of Figure 13 , we applied the strict definition that the results of the p-index have to be in the same order as the ones of the full index .\nHowever , in a practical scenario , it may be acceptable to have some of the results out of order .\nTherefore , in our next experiment we will measure the fraction of the results coming from an p-index that are contained within the results of the full index .\nThe result of the experiment is shown on Figure 14 .\nThe horizontal axis is , again , the size s of the p-index ; the vertical axis shows the average fraction of the top-20 results common with the top-20 results from the full index .\nOverall , Figure 14 depicts that EKS and LPR identify the same high ( \u2248 96 % ) fraction of results on average for any size s \u2265 30 % , with GPR not too far behind .\n5.3 Combining keyword and document pruning\nIn Sections 5.1 and 5.2 we studied the individual performance of our keyword and document pruning schemes .\nOne interesting question however is how do these policies perform in combination ?\nWhat fraction of queries can we guarantee if we apply both keyword and document pruning in our full index IF ?\nTo answer this question , we performed the following experiment .\nWe started with the full index IF and we applied keyword pruning to create an index IhP of size sh \u00b7 100 % of IF .\nAfter that , we further applied document pruning to IhP , and created our final pindex IP of size sv \u00b7 100 % of IhP .\nWe then calculated the fraction of guaranteed queries in IP .\nWe repeated the experiment for different values of sh and sv .\nThe result is shown on Figure 15 .\nThe x-axis shows the index size sh after applying keyword pruning ; the y-axis shows the index size sv after applying document pruning ; the z-axis\nshows the fraction of guaranteed queries after the two prunings .\nFor example the point ( 0.2 , 0.3 , 0.4 ) means that if we apply keyword pruning and keep 20 % of IF , and subsequently on the resulting index we apply document pruning keeping 30 % ( thus creating a pindex of size 20 % \u00b7 30 % = 6 % of IF ) we can guarantee 40 % of the queries .\nBy observing Figure 15 , we can see that for p-index sizes smaller 
than 50 % , our combined pruning does relatively well .\nFor example , by performing 40 % keyword and 40 % document pruning ( which translates to a pruned index with s = 0.16 ) we can provide a guarantee for about 60 % of the queries .\nIn Figure 15 , we also observe a `` plateau '' for sh > 0.5 and sv > 0.5 .\nFor this combined pruning policy , the optimal index size is at s = 0.13 , with sh = 0.46 and sv = 0.29 .\n6 .\nRELATED WORK\n[ 3 , 30 ] provide a good overview of inverted indexing in Web search engines and IR systems .\nExperimental studies and analyses of various partitioning schemes for an inverted index are presented in [ 6 , 23 , 33 ] .\nThe pruning algorithms that we have presented in this paper are independent of the partitioning scheme used .\nThe works in [ 1 , 5 , 7 , 20 , 27 ] are the most related to ours , as they describe pruning techniques based on the idea of keeping the postings that contribute the most in the final ranking .\nHowever , [ 1 , 5 , 7 , 27 ] do not consider any query-independent quality ( such as PageRank ) in the ranking function .\n[ 32 ] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of results .\nOur work essentially extends [ 1 , 2 , 4 , 7 , 20 , 27 , 31 ] by proposing mechanisms for providing the correctness guarantee to the computed top-k results .\nSearch engines use various methods of caching as a means of reducing the cost associated with queries [ 18 , 19 , 21 , 31 ] .\nThis thread of work is also orthogonal to ours because a caching scheme may operate on top of our p-index in order to minimize the answer computation cost .\nThe exact ranking functions employed by current search engines are closely guarded secrets .\nIn general , however , the rankings are based on query-dependent relevance and queryindependent document `` quality . 
''\nQuery-dependent relevance can be calculated in a variety of ways ( see [ 3 , 30 ] ) .\nSimilarly , there are a number of works that measure the `` quality '' of the documents , typically as captured through link-based analysis [ 17 , 28 , 26 ] .\nSince our work does not assume a particular form of ranking function , it is complementary to this body of work .\nThere has been a great body of work on top-k result calculation .\nThe main idea is to either stop the traversal of the inverted lists early , or to shrink the lists by pruning postings from the lists [ 14 , 4 , 11 , 8 ] .\nOur proof for the correctness indicator function was primarily inspired by [ 12 ] .\n7 .\nCONCLUDING REMARKS\nWeb search engines typically prune their large-scale inverted indexes in order to scale to enormous query loads .\nWhile this approach may improve performance , by computing the top results from a pruned index we may notice a significant degradation in the result quality .\nIn this paper , we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order .\nWe studied two pruning techniques , namely keyword-based and document-based pruning as well as their combination .\nOur experimental results demonstrated that our algorithms can effectively be used to prune an inverted index without degradation in the quality of results .\nIn particular , a keyword-pruned index can guarantee 73 % of the queries with a size of 30 % of the full index , while a document-pruned index can guarantee 68 % of the queries with the same size .\nWhen we combine the two pruning algorithms we can guarantee 60 % of the queries with an index size of 16 % .\nIt is our hope that our work will help search engines develop better , faster and more efficient indexes and thus provide for a better user search experience on the Web ."} {"id": "C-22", "title": "", "abstract": "", "keyphrases": ["data", "object-orient applic", "mobil object framework", "mobjex", "java", "metricscontain", "metric collect", "proxi", "perform and scalabl", "measur", "propag and deliveri", "framework", "adapt", "mobil object"], "prmu": [], "lvl-1": "Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications Hendrik Gani School of Computer Science and Information Technology, RMIT University, Melbourne, Australia hgani@cs.rmit.edu.au Caspar Ryan School of Computer Science and Information Technology, RMIT University, Melbourne, Australia caspar@cs.rmit.edu.au Pablo Rossi School of Computer Science and Information Technology, RMIT University, Melbourne, Australia pablo@cs.rmit.edu.au ABSTRACT This paper proposes, implements, and evaluates in terms of worst case performance, an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware.\nThe solution is based upon an abstract representation of the mobile object system, which holds containers aggregating metrics for each specific component including host managers, runtimes and mobile objects.\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system.\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability, efficiency and the application of simple measurement and propagation criteria to reduce 
collection overhead.\nCategories and Subject Descriptors C.2.4 Distributed Systems; D.2.8 Metrics General Terms Measurement, Performance.\n1.\nINTRODUCTION The different capabilities of mobile devices, plus the varying speed, error rate and disconnection characteristics of mobile networks [1], make it difficult to predict in advance the exact execution environment of mobile applications.\nOne solution which is receiving increasing attention in the research community is application adaptation [2-7], in which applications adjust their behaviour in response to factors such as network, processor, or memory usage.\nEffective adaptation requires detailed and up to date information about both the system and the software itself.\nMetrics related to system wide information (e.g. processor, memory and network load) are referred to as environmental metrics [5], while metrics representing application behaviour are referred as software metrics [8].\nFurthermore, the type of metrics required for performing adaptation is dependent upon the type of adaptation required.\nFor example, service-based adaptation, in which service quality or service behaviour is modified in response to changes in the runtime environment, generally requires detailed environmental metrics but only simple software metrics [4].\nOn the other hand, adaptation via object mobility [6], also requires detailed software metrics [9] since object placement is dependent on the execution characteristics of the mobile objects themselves.\nWith the exception of MobJeX [6], existing mobile object systems such as Voyager [10], FarGo [11, 12], and JavaParty [13] do not provide automated adaptation, and therefore lack the metrics collection process required to support this process.\nIn the case of MobJeX, although an adaptation engine has been implemented [5], preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks, and no existing means of automatically collecting them.\nConsequently, the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications.\nThis problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components, and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine.\nFurthermore, in some cases the location where each metric should be collected is not fixed (i.e. 
it could be done in several places) and thus a decision must be made based on the efficiency of the chosen solution (see section 3).\nThe rest of this paper is organised as follows: Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection, propagation and delivery of metrics as described in section 3.\nSection 4 describes some initial testing and results and section 5 closes with a summary, conclusions and discussion of future work.\n2.\nBACKGROUND In general, an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain.\nMobile object frameworks allow some of these objects to be tagged as mobile objects, providing middleware support for such objects to be moved at runtime to other hosts.\nAt a minimum, a mobile object framework with at least one running mobile application consists of the following components: runtimes, mobile objects, and proxies [14], although the terminology used by individual frameworks can differ [6, 10-13].\nA runtime is a container process for the management of mobile objects.\nFor example, in FarGo [15] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently, although this is not the case with MobJeX, which can run multiple applications in a single runtime using threads.\nThe applications themselves comprise mobile objects, which interact with each other through proxies [14].\nProxies, which have the same method interface as the object itself but add remote communication and object tracking functionality, are required for each target object that a source object communicates with.\nUpon migration, proxy objects move with the source object.\nThe Java based system MobJeX, which is used as the implementation platform for the metrics collection solution described in this paper, adds a number of additional middleware components.\nFirstly, a host manager (known as a service in MobJeX) provides a central point of communication by running on a known port on a per host basis, thus facilitating the enumeration or lookup of components such as runtimes or mobile objects.\nSecondly, MobJeX has a per-application mobile object container called a transport manager (TM).\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case.\nFinally, depending on adaptation mode, MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation.\n3.\nMETRICS COLLECTION This section discusses the design and derivation of a solution for collecting metrics in order to support the adaptation of applications via object migration.\nThe solution, although implemented within the MobJeX framework, is for the most part discussed in generic terms, except where explicitly stated to be MobJeX specific.\n3.1 Metrics Selection The metrics of Ryan and Rossi [9] have been chosen as the basis for this solution, since they are specifically intended for mobile application adaptation as well as having been derived from a series of mathematical models and empirically validated.\nFurthermore, the metrics were empirically shown to improve the application performance in a real adaptation scenario following a change in the execution environment.\nIt would however be beyond the scope of this paper to implement and test the full suite 
of metrics listed in [9], and thus in order to provide a useful non-random subset, we chose to implement the minimum set of metrics necessary to implement local and global adaptation [9] and thereby satisfy a range of real adaptation scenarios.\nAs such the solution presented in this section is discussed primarily in terms of these metrics, although the structure of the solution is intended to support the implementation of the remaining metrics, as well as other unspecified metrics such as those related to quality and resource utilisation.\nThis subset is listed below and categorised according to metric type.\nNote that some additional metrics were used for implementation purposes in order to derive core metrics or assist the evaluation, and as such are defined in context where appropriate.\n1.\nSoftware metrics - Number of Invocations (NI), the frequency of invocations on methods of a class.\n2.\nPerformance metrics - Method Execution Time (ET), the time taken to execute a method body (ms).\n- Method Invocation Time (IT), the time taken to invoke a method, excluding the method execution time (ms).\n3.\nResource utilization metrics - Memory Usage (MU), the memory usage of a process (in bytes).\n- Processor Usage (PU), the percentage of the CPU load of a host.\n- Network Usage (NU), the network bandwidth between two hosts (in bytes/sec).\nFollowing are brief examples of a number of these metrics in order to demonstrate their usage in an adaptation scenario.\nAs Processor Usage (PU) on a certain host increases, the Execution Time (ET) of a given method executed on that host also increases [9], thus facilitating the decision of whether to move an object with high ET to another host with low PU.\nInvocation Time (IT) shows the overhead of invoking a certain method, with the invocation overhead of marshalling parameters and transmitting remote data for a remote call being orders of magnitude higher than the cost of pushing and popping data from the method call stack.\nIn other words, remote method invocation is expensive and thus should be avoided unless the gains made by moving an object to a host with more processing power (thereby reducing ET) outweigh the higher IT of the remote call.\nFinally, Number of Invocations (NI) is used primarily as a weighting factor or multiplier in order to enable the adaptation engine to predict the value over time of a particular adaptation decision.\n3.2 Metrics Measurement This subsection discusses how each of the metrics in the subset under investigation can be obtained in terms of either direct measurement or derivation, and where in the mobile object framework such metrics should actually be measured.\nOf the environmental resource metrics, Processor Usage (PU) and Network Usage (NU) both relate to an individual machine, and thus can be directly measured through the resource monitoring subsystem that is instantiated as part of the MobJeX service.\nHowever, Memory Usage (MU), which represents the memory state of a running process rather than the memory usage of a host, should instead be collected within an individual runtime.\nThe measurement of Number of Invocations (NI) and Execution Time (ET) metrics can be also be performed via direct measurement, however in this case within the mobile object implementation (mobject) itself.\nNI involves simply incrementing a counter value at either the start or end of a method call, depending upon the desired semantics with regard to thrown exceptions, while ET can be measured by starting a timer at the beginning of 
In contrast , collecting Invocation Time ( IT ) is not as straightforward because the time taken to invoke a method can only be measured after the method finishes its execution and returns to the caller .\nIn order to collect IT metrics , another additional metric is needed .\nRyan and Rossi [ 9 ] define the metric Response Time ( RT ) as the total time taken for a method call to finish , which is the sum of IT and ET .\nThe Response Time can be measured directly using the same timer-based technique used to measure ET , although at the start and end of the proxy call rather than the method implementation .\nOnce the Response Time ( RT ) is known , IT can be derived by subtracting ET from RT .\nAlthough this derivation appears simple , in practice it is complicated by the fact that the RT and ET values from which the IT is derived are by necessity measured using timer code in different locations , i.e. RT is measured in the proxy , while ET is measured in the method body of the object implementation .\nIn addition , the proxies are by definition not part of the MobJeX containment hierarchy , since although proxies have a reference to their target object , it is not efficient for a mobile object ( mobject ) to have backward references to all of the many proxies which reference it ( one per source object ) .\nFortunately , this problem can be solved using the push-based propagation mechanism described in section 3.5 , in which the RT metric is pushed to the mobject so that IT can be derived from the ET value stored there .\nThe derived value of IT is then stored and propagated further as necessary according to the criteria of section 3.6 , the structural relationship of which is shown in Figure 1 .\n3.3 Measurement Initiation\nThe polling approach was identified as the most appropriate method for collecting resource utilisation metrics , such as Processor Usage ( PU ) , Network Usage ( NU ) and Memory Usage ( MU ) , since they are not part of , or related to , the direct flow of the application .\nTo measure PU or NU , the resource monitor polls the Operating System for the current CPU or network load respectively .\nIn the case of Memory Usage ( MU ) , the Java Virtual Machine ( JVM ) [ 16 ] is polled for the current memory load .\nNote that in order to minimise the impact on application response time , the polling action should be done asynchronously in a separate thread .
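As an illustration of the polling strategy just described, the sketch below samples Memory Usage asynchronously on a fixed period using a scheduled background thread. The class name and the use of a scheduled executor are assumptions for illustration rather than the MobJeX resource monitor, and PU/NU sampling is only indicated by a comment since it relies on OS-specific facilities.

```java
// Sketch only: asynchronous, poll-based collection of resource metrics.
// MU is sampled from the JVM; PU and NU would come from OS-specific monitors (not shown).
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ResourceMonitor {
    private volatile long memoryUsageBytes; // MU: most recently sampled value

    public void start(long periodMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Poll in a separate thread so sampling does not delay application method calls.
        scheduler.scheduleAtFixedRate(() -> {
            Runtime rt = Runtime.getRuntime();
            memoryUsageBytes = rt.totalMemory() - rt.freeMemory(); // MU for this JVM process
            // PU and NU would be sampled here via an OS-level facility.
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    public long getMemoryUsageBytes() { return memoryUsageBytes; }
}
```

A caller would typically start one such monitor per host or runtime, e.g. new ResourceMonitor().start(1000) for one-second sampling.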
as part of a normal method call) are software and performance related metrics, such as Number of Invocations (NI), Execution Time (ET), and Invocation Time (IT), which are explicitly related to the normal invocation of a method, and thus can be measured directly at this time.\n3.4 Metrics Aggregation In the solution presented in this paper, all metrics collected in the same location are aggregated in a MetricsContainer with individual containers corresponding to functional components in the mobile object framework.\nThe primary advantage of aggregating metrics in containers is that it allows them to be propagated easily as a cohesive unit through the components of the mobility framework so that they can be delivered to the adaptation engine, as discussed in the following subsection.\nNote that this containment captures the different granularity of measurement attributes and their corresponding metrics.\nConsider the case of measuring memory consumption.\nAt a coarse level of granularity this could be measured for an entire application or even a system, but could also be measured at the level of an individual object; or for an even finer level of granularity, the memory consumption during the execution of a specific method.\nAs an example of the level of granularity required for mobility based adaptation, the local adaptation algorithm proposed by Ryan and Rossi [9] requires metrics representing both the duration of a method execution and the overhead of a method invocation.\nThe use of metrics containers facilitates the collection of metrics at levels of granularity ranging from a single machine down to the individual method level.\nNote that some metrics containers do not contain any Metric objects, since as previously described, the sample implementation uses only a subset of the adaptation metrics from [9].\nHowever, for the sake of consistency and to promote flexibility in terms of adding new metrics in the future, these containers are still considered in the present design for completeness and for future work.\n3.5 Propagation and Delivery of Metrics The solution in this paper identifies two stages in the metrics collection and delivery process.\nFirstly, the propagation of metrics through the components of the mobility framework and secondly, the delivery of those metrics from the host manager/service (or runtime if the host manager is not present) to the adaptation engine.\nRegarding propagation, in brief, it is proposed that when a lower level system component detects the arrival of a new metric update (e.g. mobile object), the metric is pushed (possibly along with other relevant metrics) to the next level component (i.e. runtime or transport manager containing the mobile object), which at some later stage, again determined by a configurable criteria (for example when there are a sufficient number of changed mobjects) will get pushed to the next level component (i.e. 
the host manager or the adaptation engine).\nA further incentive for treating propagation separately from delivery is due to the distinction between local and global adaptation [9].\nLocal adaptation is performed by an engine running on the local host (for example in MobJeX this would occur within the service) and thus in this case the delivery phase would be a local inter-process call.\nConversely, global adaptation is handled by a centralised adaptation engine running on a remote host and thus the delivery of metrics is via a remote call, and in the case where multiple runtimes exist without a separate host manager the delivery process would be even more expensive.\nTherefore, due to the presence of network communication latency, it is important for the host manager to pass as many metrics as possible to the adaptation engine in one invocation, implying the need to gather these metrics in the host manager, through some form of push or propagation, before sending them to the adaptation engine.\nConsequently, an abstract representation or model [17] of the system needs to be maintained.\nSuch a model would contain model entities, corresponding to each of the main system components, connected in a tree like hierarchy, which precisely reflects the structure and containment hierarchy of the actual system.\nAttaching metrics containers to model entities allows a model entity representing a host manager to be delivered to the adaptation engine enabling it to access all metrics in that component and any of its children (i.e. runtimes, and mobile objects).\nFurthermore it would generally be expected that an adaptation engine or system controller would already maintain a model of the system that can not only be reused for propagation but also provides an effective means of delivering metrics information from the host manager to the adaptation engine.\nThe relationship between model entities and metrics containers is captured in Figure 1.\n3.6 Propagation and Delivery Criteria This subsection proposes flexible criteria to allow each component to decide when it should propagate its metrics to the next component in line (Figure 1), in order to reduce the overhead incurred when metrics are unnecessarily propagated through the components of the mobility framework and delivered to the adaptation engine.\nThis paper proposes four different types of criterion that are executed at various stages of the measurement and propagation process in order to determine whether the next action should be taken or not.\nThis approach was designed such that whenever a single criterion is not satisfied, the subsequent criteria are not tested.\nThese four criteria are described in the following subsections.\nMeasure Metric Criterion - This criterion is attached to individual Metric objects to decide whether a new metric value should be measured or not.\nThis is most useful in the case where it is expensive to measure a particular metric.\nFurthermore, this criterion can be used as a mechanism for limiting storage requirements and manipulation overhead in the case where metric history is maintained.\nSimple examples would be either time or frequency based whereas more complex criteria could be domain specific for a particular metric, or based upon information stored in the metrics history.\nNotify Metrics Container Criterion - This criterion is also attached to individual Metric objects and is used to determine the circumstances under which the Metric object should notify its MetricsContainer.\nThis is based on the 
assumption that there may be cases where it is desirable to measure and store a metric in the history for the analysis of temporal behaviour , but the change is not yet significant enough to notify the MetricsContainer for further processing .\nA simple example of this criterion would be threshold-based , in which the newest metric value is compared with the previously stored value to determine whether the difference is significant enough to be of any interest to the MetricsContainer .\nA more complex criterion could involve analysis of the history to determine whether a pattern of recent changes is significant enough to warrant further processing and possible metrics delivery .\nNotify Model Entity Criterion - Unlike the previous two criteria , this criterion is associated with a MetricsContainer .\nSince a MetricsContainer can have multiple Metric objects , of which it has explicit domain knowledge , it is able to determine if , when , and how many of these metrics should be propagated to the ModelEntity and thus become candidates for being part of the hierarchical ModelEntity push process as described below .\nThis decision making is facilitated by the notifications received from individual Metric objects as described above .\nA simple implementation would be waiting for a certain number of updates before sending a notification to the model entity .\nFor example , since the MobjectMetricsContainer object contains three metrics , a possible criterion would be to check if two or more of the metrics have changed .\nA slightly more advanced implementation can be done by giving each metric a weight to indicate how significant it is in the adaptation decision making process .\nPush Criterion - The push criterion applies to all of the ModelEntities which are containers , that is the TransportManagerModelEntity , RuntimeModelEntity and ServiceModelEntity , as well as the special case of the ProxyMetricsContainer .\nThe purpose of this criterion is twofold .\nFor the TransportManagerModelEntity this serves as a criterion to determine notification , since , as with the previously described criteria , a local reference is involved .\nFor the other model entities , this serves as an opportunity to determine both when and what metrics should be pushed to the parent container , where in the case of the ServiceModelEntity the parent is the adaptation engine itself , and in the case of the ProxyMetricsContainer the target of the push is the MobjectMetricsContainer .\nFurthermore , this criterion is evaluated using information from two sources .\nFirstly , it responds to the notification received from its own MetricsContainer , but more importantly it serves to keep track of notifications from its child ModelEntities so as to determine when and what metrics information should be pushed to its parent or target .\nIn the specialised case of the push criterion for the proxy , the decision making is based on both the ProxyMetricsContainer itself , as well as the information accumulated from the individual ProxyMethodMetricsContainers .\nNote that a push criterion is not required for a mobject , since it does not have any containment or aggregating responsibilities ; this is already handled by the MobjectMetricsContainer and its individual MobjectMethodMetricsContainers .\nFigure 1 .\nStructural overview of the hierarchical and criteria-based notification relationships between Metrics , Metrics Containers , and Model Entities .\nAlthough it is always important to reduce the number of pushes , this is especially so from a service to a centralised global adaptation engine , or from a proxy to a mobject .\nThis is because these relationships involve a remote call [ 18 ] which is expensive due to connection setup and data marshalling and unmarshalling overhead , and thus it is more efficient to send a given amount of data in aggregate form rather than sending smaller chunks multiple times .\nA simple implementation for reducing the number of pushes can be done using the concept of a process period [ 19 ] , in which case the model entity accumulates pushes from its child entities until the process period expires , at which time it pushes the accumulated metrics to its parent .\nAlternatively , it could be based on frequency , using domain knowledge about the type of children , for example when a significant number of mobjects in a particular application ( i.e.
TransportManager) have undergone substantial changes.\nFor reducing the size of pushed data, two types of pushes were considered: shallow push and deep push.\nWith shallow push, a list of metrics containers that contain updated metrics is pushed.\nIn a deep push, the model entity itself is pushed, along with its metrics container and its child entities, which also have reference to metrics containers but possibly unchanged metrics.\nIn the case of the proxy, a deep push involves pushing the ProxyMetricsContainer and all of the ProxyMethodMetricsContainers whereas a shallow push means only the ProxyMethodMetricsContainers that meet a certain criterion.\n4.\nEVALUATION The preliminary tests presented in this section aim to analyse the performance and scalability of the solution and evaluate the impact on application execution in terms of metrics collection overhead.\nAll tests were executed using two Pentium 4 3.0 GHz PCs with 1,024 MB of RAM, running Java 1.4.2_08.\nThe two machines were connected to a router with a third computer acting as a file server and hosting the external adaptation engine implemented within the MobJeX system controller, thereby simulating a global adaptation scenario.\nSince only a limited number of tests could be executed, this evaluation chose to measure the worst case scenario in which all metrics collection was initiated in mobjects, wherein the propagation cost is higher than for any other metrics collected in the system.\nIn addition, since exhaustive testing of criteria is beyond the scope of this paper, two different types of criteria were used in the tests.\nThe measure metrics criterion was chosen, since this represents the starting point of the measurement process and can control under what circumstances and how frequently metrics are measured.\nIn addition, the push criterion was also implemented on the service, in order to provide an evaluation of controlling the frequency of metrics delivery to the adaptation engine.\nAll other (update and push) criteria were set to always meaning that they always evaluated to true and thus a notification was posted.\nFigure 2 shows the metric collection overhead in the mobject (MMCO), for different numbers of mobjects and methods when all criteria are set to always to provide the maximum measurement and propagation of metrics and thus an absolute worst case performance scenario.\nIt can be seen that the independent factors of increasing the number of mobjects and methods independently are linear.\nAlthough combining these together provides an exponential growth that is approximately n-squared, the initial results are not discouraging since delivering all of the metrics associated with 20 mobjects, each having 20 methods (which constitutes quite a large application given that mobjects typically represent coarse grained object clusters) is approximately 400ms, which could reasonably be expected to be offset with adaptation gains.\nNote that in contrast, the proxy metrics collection overhead (PMCO) was relatively small and constant at < 5ms, since in the absence of a proxy push criterion (this was only implemented on the service) the response time (RT) data for a single method is pushed during every invocation.\n50 150 250 350 450 550 1 5 10 15 20 25 Number of Mobjects/Methods MobjectMetricsCollectionOverheadMMCO(ms) Methods Mobjects Both Figure 2.\nWorst case performance characteristics The next step was to determine the percentage metrics collection overhead compared with execution time in order to provide information 
about the execution characteristics of objects that would be suitable for adaptation using this metric collection approach.\nClearly, it is not practical to measure metrics and perform adaptation on objects with short execution times that cannot benefit from remote execution on hosts with greater processing power, thereby offsetting IT overhead of remote compared with local execution as well as the cost of object migration and the metrics collection process itself.\nIn addition, to demonstrate the effect of using simple frequency based criteria, the MMCO results as a percentage of method execution time were plotted as a 3-dimensional graph in Figure 3 with the z-axis representing the frequency used in both the measure metrics criterion and the service to adaptation engine push criterion.\nThis means that for a frequency value of 5 (n=5), metrics are only measured on every fifth method call, which then results in a notification through the model entity hierarchy to the service, on this same fifth invocation.\nFurthermore, the value of n=5 was also applied to the service push criterion so that metrics were only pushed to the adaptation engine after five such notifications, that is for example five different mobjects had updated their metrics.\nThese results are encouraging since even for the worst case scenario of n=1 the metric collection overhead is an acceptable 20% for a method of 1500ms duration (which is relatively short for a component or service level object in a distributed enterprise class application) with previous work on adaptation showing that such an overhead could easily be recovered by the efficiency gains made by adaptation [5].\nFurthermore, the measurement time includes delivering the results synchronously via a remote call to the adaptation engine on a different host, which would normally be done asynchronously, thus further reducing the impact on method execution performance.\nThe graph also demonstrates that even using modest criteria to reduce the metrics measurement to more realistic levels, has a rapid improvement on collection overhead at 20% for 500ms of ET.\n0 1000 2000 3000 4000 5000 1 2 3 4 5 6 0 20 40 60 80 100 120 MMCO (%) ET (milliseconds) N (interval) MMCO (%) Figure 3.\nPerformance characteristics with simple criteria 5.\nSUMMARY AND CONCLUSIONS Given the challenges of developing mobile applications that run in dynamic/heterogeneous environments, and the subsequent interest in application adaptation, this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware.\nControlled lab studies were conducted to determine worst case performance, as well as show the reduction in collection overhead when applying simple collection criteria.\nIn addition, further testing provided an initial indication of the characteristics of application objects (based on method execution time) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy.\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system, thereby reducing collection overhead.\nWhile the potentially efficacy of this approach was tested using simple criteria, given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria.\nOne such approach could be based on 
maintaining metrics history in order to determine the temporal behaviour of metrics and thus make more intelligent and conservative decisions regarding whether a change in a particular metric is likely to be of interest to the adaptation engine and should thus serve as a basis for notification for inclusion in the next metrics push.\nFurthermore, such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since for example a metric that is known to be largely constant need not be frequently measured.\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantity the gains that can be made via adaptation through object mobility and thus demonstrate in practise, the efficacy of the solution described in this paper.\nFinally, the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [20].\n6.\nREFERENCES 1.\nKatz, R.H., Adaptation and Mobility in Wireless Information Systems.\nIEEE Personal Communications, 1994.\n1: p. 6-17.\n2.\nHirschfeld, R. and Kawamura, K. Dynamic Service Adaptation.\nin ICDCS Workshops``04.\n2004.\n3.\nLemlouma, T. and Layaida, N. Context-Aware Adaptation for Mobile Devices.\nin Proceedings of IEEE International Conference on Mobile Data Management 2004.\n2004.\n4.\nNoble, B.D., et al..\nAgile Application-Aware Adaptation for Mobility.\nin Proc.\nof the 16th ACM Symposium on Operating Systems and Principles SOSP.\n1997.\nSaint-Malo, France.\n5.\nRossi, P. and Ryan, C.\nAn Empirical Evaluation of Dynamic Local Adaptation for Distributed Mobile Applications.\nin Proc.\nof 2005 International Symposium on Distributed Objects and Applications (DOA 2005).\n2005.\nLarnaca, Cyprus: SpringerVerlag.\n6.\nRyan, C. and Westhorpe, C. Application Adaptation through Transparent and Portable Object Mobility in Java.\nin International Symposium on Distributed Objects and Applications (DOA 2004).\n2004.\nLarnaca, Cyprus: SpringerVerlag.\n7.\nda Silva e Silva, F.J., Endler, M., and Kon, F. Developing Adaptive Distributed Applications: A Framework Overview and Experimental Results.\nin On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE (LNCS 2888).\n2003.\n8.\nRossi, P. and Fernandez, G. Definition and validation of design metrics for distributed applications.\nin Ninth International Software Metrics Symposium.\n2003.\nSydney: IEEE.\n9.\nRyan, C. and Rossi, P. Software, Performance and Resource Utilisation Metrics for Context Aware Mobile Applications.\nin Proceedings of International Software Metrics Symposium IEEE Metrics 2005.\n2005.\nComo, Italy.\n10.\nRecursion Software Inc..\nVoyager URL: http://www.recursionsw.com/voyager.htm.\n2005.\n11.\nHolder, O., Ben-Shaul, I., and Gazit, H., System Support for Dynamic Layout of Distributed Applications.\n1998, TechinonIsrael Institute of Technology.\np. 163 - 173.\n12.\nHolder, O., Ben-Shaul, I., and Gazit, H. Dynamic Layout of Distributed Applications in FarGo.\nin 21st Int'l Conf.\nSoftware Engineering (ICSE'99).\n1999: ACM Press.\n13.\nPhilippsen, M. and Zenger, M., JavaParty - Transparent Remote Objects in Java.\nConcurrency: Practice and Experience, 1997.\n9(11): p. 1225-1242.\n14.\nShapiro, M. Structure and Encapsulation in Distributed Systems: the Proxy Principle.\nin Proc.6th Intl..\nConference on Distributed Computing Systems.\n1986.\nCambridge, Mass. (USA): IEEE.\n15.\nGazit, H., Ben-Shaul, I., and Holder, O. 
Monitoring-Based Dynamic Relocation of Components in Fargo.\nin Proceedings of the Second International Symposium on Agent Systems and Applications and Fourth International Symposium on Mobile Agents.\n2000.\n16.\nLindholm, T. and Yellin, F., The Java Virtual Machine Specification 2nd Edition.\n1999: Addison-Wesley.\n17.\nRandell, L.G., Holst, L.G., and Bolmsj\u00f6, G.S. Incremental System Development of Large Discrete-Event Simulation Models.\nin Proceedings of the 31st conference on Winter Simulation.\n1999.\nPhoenix, Arizona.\n18.\nWaldo, J., Remote Procedure Calls and Java Remote Method Invocation.\nIEEE Concurrency, 1998.\n6(3): p. 5-7.\n19.\nRolia, J. and Lin, B. Consistency Issues in Distributed Application Performance Metrics.\nin Proceedings of the 1994 Conference of the Centre for Advanced Studies on Collaborative Research.\n1994.\nToronto, Canada.\n20.\nHenricksen, K. and Indulska, J.\nA software engineering framework for context-aware pervasive computing.\nin Proceedings of the 2nd IEEE Conference on Pervasive Computing and Communications (PerCom).\n2004.\nOrlando.", "lvl-3": "Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications\nABSTRACT\nThis paper proposes , implements , and evaluates in terms of worst case performance , an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware .\nThe solution is based upon an abstract representation of the mobile object system , which holds containers aggregating metrics for each specific component including host managers , runtimes and mobile objects .\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system .\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability , efficiency and the application of simple measurement and propagation criteria to reduce collection overhead .\n1 .\nINTRODUCTION\nThe different capabilities of mobile devices , plus the varying speed , error rate and disconnection characteristics of mobile networks [ 1 ] , make it difficult to predict in advance the exact execution environment of mobile applications .\nOne solution which is receiving increasing attention in the research community is application adaptation [ 2-7 ] , in which applications adjust their behaviour in response to factors such as network , processor , or memory usage .\nEffective adaptation requires detailed and up to date information about both the system and the software itself .\nMetrics related to system wide information ( e.g. 
processor , memory and network load ) are referred to as environmental metrics [ 5 ] , while metrics representing application behaviour are referred as\nsoftware metrics [ 8 ] .\nFurthermore , the type of metrics required for performing adaptation is dependent upon the type of adaptation required .\nFor example , service-based adaptation , in which service quality or service behaviour is modified in response to changes in the runtime environment , generally requires detailed environmental metrics but only simple software metrics [ 4 ] .\nOn the other hand , adaptation via object mobility [ 6 ] , also requires detailed software metrics [ 9 ] since object placement is dependent on the execution characteristics of the mobile objects themselves .\nWith the exception of MobJeX [ 6 ] , existing mobile object systems such as Voyager [ 10 ] , FarGo [ 11 , 12 ] , and JavaParty [ 13 ] do not provide automated adaptation , and therefore lack the metrics collection process required to support this process .\nIn the case of MobJeX , although an adaptation engine has been implemented [ 5 ] , preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks , and no existing means of automatically collecting them .\nConsequently , the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications .\nThis problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components , and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine .\nFurthermore , in some cases the location where each metric should be collected is not fixed ( i.e. 
it could be done in several places ) and thus a decision must be made based on the efficiency of the chosen solution ( see section 3 ) .\nThe rest of this paper is organised as follows : Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection , propagation and delivery of metrics as described in section 3 .\nSection 4 describes some initial testing and results and section 5 closes with a summary , conclusions and discussion of future work .\n2 .\nBACKGROUND\nIn general , an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain .\nMobile object frameworks allow some of these objects to be tagged as mobile objects , providing middleware support for such objects to be moved at runtime to other hosts .\nAt a minimum , a mobile object framework with at least one running mobile application consists of the following components : runtimes , mobile objects , and proxies [ 14 ] , although the terminology used by individual frameworks can differ [ 6 , 10-13 ] .\nA runtime is a container process for the management of mobile objects .\nFor example , in FarGo [ 15 ] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently , although this is not the case with MobJeX , which can run multiple applications in a single runtime using threads .\nThe applications themselves comprise mobile objects , which interact with each other through proxies [ 14 ] .\nProxies , which have the same method interface as the object itself but add remote communication and object tracking functionality , are required for each target object that a source object communicates with .\nUpon migration , proxy objects move with the source object .\nThe Java based system MobJeX , which is used as the implementation platform for the metrics collection solution described in this paper , adds a number of additional middleware components .\nFirstly , a host manager ( known as a service in MobJeX ) provides a central point of communication by running on a known port on a per host basis , thus facilitating the enumeration or lookup of components such as runtimes or mobile objects .\nSecondly , MobJeX has a per-application mobile object container called a transport manager ( TM ) .\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case .\nFinally , depending on adaptation mode , MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation .\n3 .\nMETRICS COLLECTION\n3.1 Metrics Selection\n3 .\nResource utilization metrics\n3.2 Metrics Measurement\n3.3 Measurement Initiation\n3.4 Metrics Aggregation\n3.5 Propagation and Delivery of Metrics\n3.6 Propagation and Delivery Criteria\nNotify Metrics Container Criterion - This criterion is also\nNotify Model Entity Criterion - Unlike the previous two\n4 .\nEVALUATION\n5 .\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic/heterogeneous environments , and the subsequent interest in application adaptation , this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware .\nControlled lab studies were conducted to determine worst case performance , as well as 
show the reduction in collection overhead when applying simple collection criteria .\nIn addition , further testing provided an initial indication of the characteristics of application objects ( based on method execution time ) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy .\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system , thereby reducing collection overhead .\nWhile the potentially efficacy of this approach was tested using simple criteria , given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria .\nOne such approach could be based on maintaining metrics history in order to determine the temporal behaviour of metrics and thus make more intelligent and conservative decisions regarding whether a change in a particular metric is likely to be of interest to the adaptation engine and should thus serve as a basis for notification for inclusion in the next metrics push .\nFurthermore , such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since for example a metric that is known to be largely constant need not be frequently measured .\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantity the gains that can be made via adaptation through object mobility and thus demonstrate in practise , the efficacy of the solution described in this paper .\nFinally , the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [ 20 ] .", "lvl-4": "Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications\nABSTRACT\nThis paper proposes , implements , and evaluates in terms of worst case performance , an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware .\nThe solution is based upon an abstract representation of the mobile object system , which holds containers aggregating metrics for each specific component including host managers , runtimes and mobile objects .\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system .\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability , efficiency and the application of simple measurement and propagation criteria to reduce collection overhead .\n1 .\nINTRODUCTION\nEffective adaptation requires detailed and up to date information about both the system and the software itself .\nMetrics related to system wide information ( e.g. 
processor , memory and network load ) are referred to as environmental metrics [ 5 ] , while metrics representing application behaviour are referred as\nsoftware metrics [ 8 ] .\nFurthermore , the type of metrics required for performing adaptation is dependent upon the type of adaptation required .\nFor example , service-based adaptation , in which service quality or service behaviour is modified in response to changes in the runtime environment , generally requires detailed environmental metrics but only simple software metrics [ 4 ] .\nOn the other hand , adaptation via object mobility [ 6 ] , also requires detailed software metrics [ 9 ] since object placement is dependent on the execution characteristics of the mobile objects themselves .\nWith the exception of MobJeX [ 6 ] , existing mobile object systems such as Voyager [ 10 ] , FarGo [ 11 , 12 ] , and JavaParty [ 13 ] do not provide automated adaptation , and therefore lack the metrics collection process required to support this process .\nIn the case of MobJeX , although an adaptation engine has been implemented [ 5 ] , preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks , and no existing means of automatically collecting them .\nConsequently , the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications .\nThis problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components , and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine .\nThe rest of this paper is organised as follows : Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection , propagation and delivery of metrics as described in section 3 .\nSection 4 describes some initial testing and results and section 5 closes with a summary , conclusions and discussion of future work .\n2 .\nBACKGROUND\nIn general , an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain .\nMobile object frameworks allow some of these objects to be tagged as mobile objects , providing middleware support for such objects to be moved at runtime to other hosts .\nAt a minimum , a mobile object framework with at least one running mobile application consists of the following components : runtimes , mobile objects , and proxies [ 14 ] , although the terminology used by individual frameworks can differ [ 6 , 10-13 ] .\nA runtime is a container process for the management of mobile objects .\nFor example , in FarGo [ 15 ] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently , although this is not the case with MobJeX , which can run multiple applications in a single runtime using threads .\nThe applications themselves comprise mobile objects , which interact with each other through proxies [ 14 ] .\nUpon migration , proxy objects move with the source object .\nThe Java based system MobJeX , which is used as the implementation platform for the metrics collection solution described in this paper , adds a number of additional middleware components .\nFirstly , a host manager ( known as a service in MobJeX ) provides a central 
point of communication by running on a known port on a per host basis , thus facilitating the enumeration or lookup of components such as runtimes or mobile objects .\nSecondly , MobJeX has a per-application mobile object container called a transport manager ( TM ) .\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case .\nFinally , depending on adaptation mode , MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation .\n5 .\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic/heterogeneous environments , and the subsequent interest in application adaptation , this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware .\nControlled lab studies were conducted to determine worst case performance , as well as show the reduction in collection overhead when applying simple collection criteria .\nIn addition , further testing provided an initial indication of the characteristics of application objects ( based on method execution time ) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy .\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system , thereby reducing collection overhead .\nFurthermore , such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since for example a metric that is known to be largely constant need not be frequently measured .\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantity the gains that can be made via adaptation through object mobility and thus demonstrate in practise , the efficacy of the solution described in this paper .\nFinally , the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [ 20 ] .", "lvl-2": "Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications\nABSTRACT\nThis paper proposes , implements , and evaluates in terms of worst case performance , an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware .\nThe solution is based upon an abstract representation of the mobile object system , which holds containers aggregating metrics for each specific component including host managers , runtimes and mobile objects .\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system .\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability , efficiency and the application of simple measurement and propagation criteria to reduce collection overhead .\n1 .\nINTRODUCTION\nThe different capabilities of mobile devices , plus the varying speed , error rate and disconnection characteristics of mobile networks [ 1 ] , make it difficult to predict in advance the exact execution environment of mobile applications .\nOne solution which is receiving increasing attention in the research community is 
application adaptation [ 2-7 ] , in which applications adjust their behaviour in response to factors such as network , processor , or memory usage .\nEffective adaptation requires detailed and up to date information about both the system and the software itself .\nMetrics related to system wide information ( e.g. processor , memory and network load ) are referred to as environmental metrics [ 5 ] , while metrics representing application behaviour are referred as\nsoftware metrics [ 8 ] .\nFurthermore , the type of metrics required for performing adaptation is dependent upon the type of adaptation required .\nFor example , service-based adaptation , in which service quality or service behaviour is modified in response to changes in the runtime environment , generally requires detailed environmental metrics but only simple software metrics [ 4 ] .\nOn the other hand , adaptation via object mobility [ 6 ] , also requires detailed software metrics [ 9 ] since object placement is dependent on the execution characteristics of the mobile objects themselves .\nWith the exception of MobJeX [ 6 ] , existing mobile object systems such as Voyager [ 10 ] , FarGo [ 11 , 12 ] , and JavaParty [ 13 ] do not provide automated adaptation , and therefore lack the metrics collection process required to support this process .\nIn the case of MobJeX , although an adaptation engine has been implemented [ 5 ] , preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks , and no existing means of automatically collecting them .\nConsequently , the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications .\nThis problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components , and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine .\nFurthermore , in some cases the location where each metric should be collected is not fixed ( i.e. 
it could be done in several places ) and thus a decision must be made based on the efficiency of the chosen solution ( see section 3 ) .\nThe rest of this paper is organised as follows : Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection , propagation and delivery of metrics as described in section 3 .\nSection 4 describes some initial testing and results and section 5 closes with a summary , conclusions and discussion of future work .\n2 .\nBACKGROUND\nIn general , an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain .\nMobile object frameworks allow some of these objects to be tagged as mobile objects , providing middleware support for such objects to be moved at runtime to other hosts .\nAt a minimum , a mobile object framework with at least one running mobile application consists of the following components : runtimes , mobile objects , and proxies [ 14 ] , although the terminology used by individual frameworks can differ [ 6 , 10-13 ] .\nA runtime is a container process for the management of mobile objects .\nFor example , in FarGo [ 15 ] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently , although this is not the case with MobJeX , which can run multiple applications in a single runtime using threads .\nThe applications themselves comprise mobile objects , which interact with each other through proxies [ 14 ] .\nProxies , which have the same method interface as the object itself but add remote communication and object tracking functionality , are required for each target object that a source object communicates with .\nUpon migration , proxy objects move with the source object .\nThe Java based system MobJeX , which is used as the implementation platform for the metrics collection solution described in this paper , adds a number of additional middleware components .\nFirstly , a host manager ( known as a service in MobJeX ) provides a central point of communication by running on a known port on a per host basis , thus facilitating the enumeration or lookup of components such as runtimes or mobile objects .\nSecondly , MobJeX has a per-application mobile object container called a transport manager ( TM ) .\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case .\nFinally , depending on adaptation mode , MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation .\n3 .\nMETRICS COLLECTION\nThis section discusses the design and derivation of a solution for collecting metrics in order to support the adaptation of applications via object migration .\nThe solution , although implemented within the MobJeX framework , is for the most part discussed in generic terms , except where explicitly stated to be MobJeX specific .\n3.1 Metrics Selection\nThe metrics of Ryan and Rossi [ 9 ] have been chosen as the basis for this solution , since they are specifically intended for mobile application adaptation as well as having been derived from a series of mathematical models and empirically validated .\nFurthermore , the metrics were empirically shown to improve the application performance in a real adaptation scenario following a change in the execution environment .\nIt would however 
be beyond the scope of this paper to implement and test the full suite of metrics listed in [ 9 ] , and thus in order to provide a useful non-random subset , we chose to implement the minimum set of metrics necessary to implement local and global adaptation [ 9 ] and thereby satisfy a range of real adaptation scenarios .\nAs such the solution presented in this section is discussed primarily in terms of these metrics , although the structure of the solution is intended to support the implementation of the remaining metrics , as well as other unspecified metrics such as those related to quality and resource utilisation .\nThis subset is listed below and categorised according to metric type .\nNote that some additional metrics were used for implementation purposes in order to derive core metrics or assist the evaluation , and as such are defined in context where appropriate .\n1 .\nSoftware metrics -- Number of Invocations ( NI ) , the frequency of invocations on methods of a class .\n2 .\nPerformance metrics -- Method Execution Time ( ET ) , the time taken to execute a method body ( ms ) .\n-- Method Invocation Time ( IT ) , the time taken to invoke a\nmethod , excluding the method execution time ( ms ) .\n3 .\nResource utilization metrics\n-- Memory Usage ( MU ) , the memory usage of a process ( in bytes ) .\n-- Processor Usage ( PU ) , the percentage of the CPU load of a host .\n-- Network Usage ( NU ) , the network bandwidth between two hosts ( in bytes/sec ) .\nFollowing are brief examples of a number of these metrics in order to demonstrate their usage in an adaptation scenario .\nAs Processor Usage ( PU ) on a certain host increases , the Execution Time ( ET ) of a given method executed on that host also increases [ 9 ] , thus facilitating the decision of whether to move an object with high ET to another host with low PU .\nInvocation Time ( IT ) shows the overhead of invoking a certain method , with the invocation overhead of marshalling parameters and transmitting remote data for a remote call being orders of magnitude higher than the cost of pushing and popping data from the method call stack .\nIn other words , remote method invocation is expensive and thus should be avoided unless the gains made by moving an object to a host with more processing power ( thereby reducing ET ) outweigh the higher IT of the remote call .\nFinally , Number of Invocations ( NI ) is used primarily as a weighting factor or multiplier in order to enable the adaptation engine to predict the value over time of a particular adaptation decision .\n3.2 Metrics Measurement\nThis subsection discusses how each of the metrics in the subset under investigation can be obtained in terms of either direct measurement or derivation , and where in the mobile object framework such metrics should actually be measured .\nOf the environmental resource metrics , Processor Usage ( PU ) and Network Usage ( NU ) both relate to an individual machine , and thus can be directly measured through the resource monitoring subsystem that is instantiated as part of the MobJeX service .\nHowever , Memory Usage ( MU ) , which represents the memory state of a running process rather than the memory usage of a host , should instead be collected within an individual runtime .\nThe measurement of Number of Invocations ( NI ) and Execution Time ( ET ) metrics can be also be performed via direct measurement , however in this case within the mobile object implementation ( mobject ) itself .\nNI involves simply incrementing a counter value at 
either the start or end of a method call , depending upon the desired semantics with regard to thrown exceptions , while ET can be measured by starting a timer at the beginning of the method and stopping it at the end of the method , then retrieving the duration recorded by the timer .\nIn contrast , collecting Invocation Time ( IT ) is not as straightforward because the time taken to invoke a method can only be measured after the method finishes its execution and returns to the caller .\nIn order to collect IT metrics , another additional metric is needed .\nRyan and Rossi [ 9 ] define the metric Response Time ( RT ) as the total time taken for a method call to finish , which is the sum of IT and ET .\nThe Response Time can be measured directly using the same timer based technique used to measure ET , although at the start and end of the proxy call rather than the method implementation .\nOnce the Response Time ( RT ) is known , IT can be derived by subtracting ET from RT .\nAlthough this derivation appears simple , in practice it is complicated by the fact that the RT and ET values from which the IT is derived are by necessity measured using timer code in different locations i.e. RT measured in the proxy , ET measured in the method body of the object implementation .\nIn addition , the proxies are by definition not part of the MobJeX containment hierarchy , since although proxies have a reference to their target object , it is not efficient for a mobile object ( mobject ) to have backward references to all of the many proxies which reference it ( one per source object ) .\nFortunately , this problem can be solved using the push based propagation mechanism described in section 3.5 in which the RT metric is pushed to the mobject so that IT can be derived from the ET value stored there .\nThe derived value of IT is then stored and propagated further as necessary according to the criteria of section 3.6 , the structural relationship of which is shown in Figure 1 .\n3.3 Measurement Initiation\nThe polling approach was identified as the most appropriate method for collecting resource utilisation metrics , such as Processor Usage ( PU ) , Network Usage ( NU ) and Memory Usage ( MU ) , since they are not part of , or related to , the direct flow of the application .\nTo measure PU or NU , the resource monitor polls the Operating System for the current CPU or network load respectively .\nIn the case of Memory Usage ( MU ) , the Java Virtual Machine ( JVM ) [ 16 ] is polled for the current memory load .\nNote that in order to minimise the impact on application response time , the polling action should be done asynchronously in a separate thread .
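To make the polling approach concrete, the sketch below shows one way a resource monitor could sample Processor Usage and Memory Usage off the application's request path using standard JDK facilities. It is a minimal illustration under stated assumptions rather than the actual MobJeX resource monitoring subsystem: the class name, the record callback and the polling period are invented for the example, and Network Usage is omitted because the JDK offers no portable probe for it.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical poller illustrating asynchronous collection of PU and MU as described above.
public class ResourceMonitorSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

    public void start(long periodMillis) {
        // Sampling runs in its own thread so it does not delay application method calls.
        scheduler.scheduleAtFixedRate(this::poll, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    private void poll() {
        // PU: the system load average is a coarse stand-in for CPU load (-1 if unsupported).
        double pu = os.getSystemLoadAverage();
        // MU: heap memory currently used by this JVM process, in bytes.
        long mu = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        record("PU", pu);
        record("MU", mu);
    }

    // Stand-in for updating the corresponding Metric objects in a MetricsContainer.
    private void record(String metricName, double value) {
        System.out.println(metricName + " = " + value);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}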
Metrics that are suitable for application initiated collection ( i.e. as part of a normal method call ) are software and performance related metrics , such as Number of Invocations ( NI ) , Execution Time ( ET ) , and Invocation Time ( IT ) , which are explicitly related to the normal invocation of a method , and thus can be measured directly at this time .\n3.4 Metrics Aggregation\nIn the solution presented in this paper , all metrics collected in the same location are aggregated in a MetricsContainer with individual containers corresponding to functional components in the mobile object framework .\nThe primary advantage of aggregating metrics in containers is that it allows them to be propagated easily as a cohesive unit through the components of the mobility framework so that they can be delivered to the adaptation engine , as discussed in the following subsection .\nNote that this containment captures the different granularity of measurement attributes and their corresponding metrics .\nConsider the case of measuring memory consumption .\nAt a coarse level of granularity this could be measured for an entire application or even a system , but could also be measured at the level of an individual object ; or for an even finer level of granularity , the memory consumption during the execution of a specific method .\nAs an example of the level of granularity required for mobility based adaptation , the local adaptation algorithm proposed by Ryan and Rossi [ 9 ] requires metrics representing both the duration of a method execution and the overhead of a method invocation .\nThe use of metrics containers facilitates the collection of metrics at levels of granularity ranging from a single machine down to the individual method level .\nNote that some metrics containers do not contain any Metric objects , since as previously described , the sample implementation uses only a subset of the adaptation metrics from [ 9 ] .\nHowever , for the sake of consistency and to promote flexibility in terms of adding new metrics in the future , these containers are still considered in the present design for completeness and for future work .\n3.5 Propagation and Delivery of Metrics\nThe solution in this paper identifies two stages in the metrics collection and delivery process .\nFirstly , the propagation of metrics through the components of the mobility framework and secondly , the delivery of those metrics from the host manager/service ( or runtime if the host manager is not present ) to the adaptation engine .\nRegarding propagation , in brief , it is proposed that when a lower level system component detects the arrival of a new metric update ( e.g. mobile object ) , the metric is pushed ( possibly along with other relevant metrics ) to the next level component ( i.e. runtime or transport manager containing the mobile object ) , which at some later stage , again determined by a configurable criteria ( for example when there are a sufficient number of changed mobjects ) will get pushed to the next level component ( i.e.
the host manager or the adaptation engine ) .\nA further incentive for treating propagation separately from delivery is due to the distinction between local and global adaptation [ 9 ] .\nLocal adaptation is performed by an engine running on the local host ( for example in MobJeX this would occur within the service ) and thus in this case the delivery phase would be a local inter-process call .\nConversely , global adaptation is handled by a centralised adaptation engine running on a remote host and thus the delivery of metrics is via a remote call , and in the case where multiple runtimes exist without a separate host manager the delivery process would be even more expensive .\nTherefore , due to the presence of network communication latency , it is important for the host manager to pass as many metrics as possible to the adaptation engine in one invocation , implying the need to gather these metrics in the host manager , through some form of push or propagation , before sending them to the adaptation engine .\nConsequently , an abstract representation or model [ 17 ] of the system needs to be maintained .\nSuch a model would contain model entities , corresponding to each of the main system components , connected in a tree like hierarchy , which precisely reflects the structure and containment hierarchy of the actual system .\nAttaching metrics containers to model entities allows a model entity representing a host manager to be delivered to the adaptation engine enabling it to access all metrics in that component and any of its children ( i.e. runtimes , and mobile objects ) .\nFurthermore it would generally be expected that an adaptation engine or system controller would already maintain a model of the system that can not only be reused for propagation but also provides an effective means of delivering metrics information from the host manager to the adaptation engine .\nThe relationship between model entities and metrics containers is captured in Figure 1 .\n3.6 Propagation and Delivery Criteria\nThis subsection proposes flexible criteria to allow each component to decide when it should propagate its metrics to the next component in line ( Figure 1 ) , in order to reduce the overhead incurred when metrics are unnecessarily propagated through the components of the mobility framework and delivered to the adaptation engine .\nThis paper proposes four different types of criterion that are executed at various stages of the measurement and propagation process in order to determine whether the next action should be taken or not .\nThis approach was designed such that whenever a single criterion is not satisfied , the subsequent criteria are not tested .\nThese four criteria are described in the following subsections .\nFigure 1 .\nStructural overview of the hierarchical and criteriabased notification relationships between Metrics , Metrics Containers , and Model Entities\nMeasure Metric Criterion - This criterion is attached to individual Metric objects to decide whether a new metric value should be measured or not .\nThis is most useful in the case where it is expensive to measure a particular metric .\nFurthermore , this criterion can be used as a mechanism for limiting storage requirements and manipulation overhead in the case where metric history is maintained .\nSimple examples would be either time or frequency based whereas more complex criteria could be domain specific for a particular metric , or based upon information stored in the metrics history .\nNotify Metrics Container Criterion 
- This criterion is also\nattached to individual Metric objects and is used to determine the circumstances under which the Metric object should notify its MetricsContainer .\nThis is based on the assumption that there may be cases where it is desirable to measure and store a metric in the history for the analysis of temporal behaviour , but is not yet significant enough to notify the MetricsContainer for further processing .\nA simple example of this criterion would be threshold based in which the newest metric value is compared with the previously stored value to determine whether the difference is significant enough to be of any interest to the MetricsContainer .\nA more complex criterion could involve analysis of the history to determine whether a pattern of recent changes is significant enough to warrant further processing and possible metrics delivery .\nNotify Model Entity Criterion - Unlike the previous two\ncriteria , this criterion is associated with a MetricsContainer .\nSince a MetricsContainer can have multiple Metric objects , of which it has explicit domain knowledge , it is able to determine if , when , and how many of these metrics should be propagated to the ModelEntity and thus become candidates for being part of the hierarchical ModelEntity push process as described below .\nThis decision making is facilitated by the notifications received from individual Metric objects as described above .\nA simple implementation would be waiting for a certain number of updates before sending a notification to the model entity .\nFor example , since the MobjectMetricsContainer object contains three metrics , a possible criteria would be to check if two or more of the metrics have changed .\nA slightly more advanced implementation can be done by giving each metric a weight to indicate how significant it is in the adaptation decision making process .\nPush Criterion - The push criterion applies to all of the ModelEntites which are containers , that is the TransportManagerModelEntity , RuntimeModelEntity and ServiceModelEntity , as well as the special case of the ProxyMetricsContainer .\nThe purpose of this criterion is twofold .\nFor the TransportManagerModelEntity this serves as a criterion to determine notification since as with the previously described criteria , a local reference is involved .\nFor the other model entities , this serves as an opportunity to determine both when and what metrics should be pushed to the parent container wherein the case of the ServiceModelEntity the parent is the adaptation engine itself or in the case of the ProxyMetricsContainer the target of the push is the MobjectMetricsContainer .\nFurthermore , this criterion is evaluated using information from two sources .\nFirstly , it responds to the notification received from its own MetricsContainer but more importantly it serves to keep track of notifications from its child ModelEntities so as to determine when and what metrics information should be pushed to its parent or target .\nIn the specialised case of the push criterion for the proxy , the decision making is based on both the ProxyMetricsContainer itself , as well as the information accumulated from the individual ProxyMethodMetricsContainers .\nNote that a push criterion is not required for a mobject since it does not have any containment or aggregating responsibilities since this is already\nhandled by the MobjectMetricsContainer and its individual MobjectMethodMetricsContainers .\nAlthough it is always important to reduce the number of pushes , 
this is especially so from a service to a centralised global adaptation engine , or from a proxy to a mobject .\nThis is because these relationships involve a remote call [ 18 ] which is expensive due to connection setup and data marshalling and unmarshalling overhead , and thus it is more efficient to send a given amount of data in aggregate form rather than sending smaller chunks multiple times .\nA simple implementation for reducing the number of pushes can be done using the concept of a process period [ 19 ] in which case the model entity accumulates pushes from its child entities until the process period expires at which time it pushes the accumulated metrics to its parent .\nAlternatively it could be based on frequency using domain knowledge about the type of children for example when a significant number of mobjects in a particular application ( i.e. TransportManager ) have undergone substantial changes .\nFor reducing the size of pushed data , two types of pushes were considered : shallow push and deep push .\nWith shallow push , a list of metrics containers that contain updated metrics is pushed .\nIn a deep push , the model entity itself is pushed , along with its metrics container and its child entities , which also have reference to metrics containers but possibly unchanged metrics .\nIn the case of the proxy , a deep push involves pushing the ProxyMetricsContainer and all of the ProxyMethodMetricsContainers whereas a shallow push means only the ProxyMethodMetricsContainers that meet a certain criterion .\n4 .\nEVALUATION\nThe preliminary tests presented in this section aim to analyse the performance and scalability of the solution and evaluate the impact on application execution in terms of metrics collection overhead .\nAll tests were executed using two Pentium 4 3.0 GHz PCs with 1,024 MB of RAM , running Java 1.4.2 _ 08 .\nThe two machines were connected to a router with a third computer acting as a file server and hosting the external adaptation engine implemented within the MobJeX system controller , thereby simulating a global adaptation scenario .\nSince only a limited number of tests could be executed , this evaluation chose to measure the worst case scenario in which all metrics collection was initiated in mobjects , wherein the propagation cost is higher than for any other metrics collected in the system .\nIn addition , since exhaustive testing of criteria is beyond the scope of this paper , two different types of criteria were used in the tests .\nThe measure metrics criterion was chosen , since this represents the starting point of the measurement process and can control under what circumstances and how frequently metrics are measured .\nIn addition , the push criterion was also implemented on the service , in order to provide an evaluation of controlling the frequency of metrics delivery to the adaptation engine .\nAll other ( update and push ) criteria were set to `` always '' meaning that they always evaluated to true and thus a notification was posted .\nFigure 2 shows the metric collection overhead in the mobject ( MMCO ) , for different numbers of mobjects and methods when all criteria are set to always to provide the maximum measurement and propagation of metrics and thus an absolute worst case performance scenario .\nIt can be seen that the independent factors of increasing the number of mobjects and methods independently are linear .\nAlthough combining these together provides an exponential growth that is approximately n-squared , the initial results are 
not discouraging since delivering all of the metrics associated with 20 mobjects , each having 20 methods ( which constitutes quite a large application given that mobjects typically represent coarse grained object clusters ) is approximately 400ms , which could reasonably be expected to be offset with adaptation gains .\nNote that in contrast , the proxy metrics collection overhead ( PMCO ) was relatively small and constant at < 5ms , since in the absence of a proxy push criterion ( this was only implemented on the service ) the response time ( RT ) data for a single method is pushed during every invocation .\nFigure 2 .\nWorst case performance characteristics\nThe next step was to determine the percentage metrics collection overhead compared with execution time in order to provide information about the execution characteristics of objects that would be suitable for adaptation using this metric collection approach .\nClearly , it is not practical to measure metrics and perform adaptation on objects with short execution times that can not benefit from remote execution on hosts with greater processing power , thereby offsetting IT overhead of remote compared with local execution as well as the cost of object migration and the metrics collection process itself .\nIn addition , to demonstrate the effect of using simple frequency based criteria , the MMCO results as a percentage of method execution time were plotted as a 3-dimensional graph in Figure 3 with the z-axis representing the frequency used in both the measure metrics criterion and the service to adaptation engine push criterion .\nThis means that for a frequency value of 5 ( n = 5 ) , metrics are only measured on every fifth method call , which then results in a notification through the model entity hierarchy to the service , on this same fifth invocation .\nFurthermore , the value of n = 5 was also applied to the service push criterion so that metrics were only pushed to the adaptation engine after five such notifications , that is for example five different mobjects had updated their metrics .\nThese results are encouraging since even for the worst case scenario of n = 1 the metric collection overhead is an acceptable 20 % for a method of 1500ms duration ( which is relatively short for a component or service level object in a distributed enterprise class application ) with previous work on adaptation showing that such an overhead could easily be recovered by the efficiency gains made by adaptation [ 5 ] .\nFurthermore , the measurement time includes delivering the results synchronously via a remote call to the adaptation engine on a different host , which would normally be done asynchronously , thus further reducing the impact on method execution performance .\nThe graph also demonstrates that even using modest criteria to reduce the metrics measurement to\nmore realistic levels , has a rapid improvement on collection overhead at 20 % for 500ms of ET .\nFigure 3 .\nPerformance characteristics with simple criteria\n5 .\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic/heterogeneous environments , and the subsequent interest in application adaptation , this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware .\nControlled lab studies were conducted to determine worst case performance , as well as show the reduction in collection overhead when applying simple collection criteria 
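.
As an illustration of the simple collection criteria referred to here, the sketch below shows a counter-based frequency criterion of the kind used in the tests above, satisfied on every n-th attempt (so n = 5 corresponds to measuring on every fifth call, or pushing after five notifications). The class and method names are assumptions made for the example rather than the MobJeX criterion API.

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical frequency-based criterion: satisfied on every n-th evaluation.
public class FrequencyCriterionSketch {
    private final int n;
    private final AtomicLong attempts = new AtomicLong();

    public FrequencyCriterionSketch(int n) {
        this.n = n;
    }

    // Returns true when the metric should be measured, or the accumulated
    // notifications pushed, depending on where the criterion is attached.
    public boolean isSatisfied() {
        return attempts.incrementAndGet() % n == 0;
    }
}

Attached as a measure metrics criterion with n = 5 this measures ET and NI only on every fifth invocation; attached to the service as a push criterion with the same n it delivers metrics to the adaptation engine only after five such notifications have accumulated.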
In addition , further testing provided an initial indication of the characteristics of application objects ( based on method execution time ) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy .\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system , thereby reducing collection overhead .\nWhile the potential efficacy of this approach was tested using simple criteria , given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria .\nOne such approach could be based on maintaining metrics history in order to determine the temporal behaviour of metrics and thus make more intelligent and conservative decisions regarding whether a change in a particular metric is likely to be of interest to the adaptation engine and should thus serve as a basis for notification for inclusion in the next metrics push .\nFurthermore , such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since for example a metric that is known to be largely constant need not be frequently measured .\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantify the gains that can be made via adaptation through object mobility and thus demonstrate in practice the efficacy of the solution described in this paper .\nFinally , the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [ 20 ] ."} {"id": "C-36", "title": "", "abstract": "", "keyphrases": ["secur publish/subscrib system", "distribut access control", "multipl administr domain", "attribut encrypt", "multi-domain", "overal commun overhead", "distribut system-distribut applic", "perform", "encrypt", "congest charg servic", "distribut access control", "administr domain"], "prmu": [], "lvl-1": "Encryption-Enforced Access Control in Dynamic Multi-Domain Publish/Subscribe Networks Lauri I.W. Pesonen University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.\ncam.ac.uk David M.
Eyers University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.\ncam.ac.uk Jean Bacon University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.\ncam.ac.uk ABSTRACT Publish/subscribe systems provide an efficient, event-based, wide-area distributed communications infrastructure.\nLarge scale publish/subscribe systems are likely to employ components of the event transport network owned by cooperating, but independent organisations.\nAs the number of participants in the network increases, security becomes an increasing concern.\nThis paper extends previous work to present and evaluate a secure multi-domain publish/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types.\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network.\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques, and by the use of caching to decrease unnecessary decryptions.\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish/subscribe networks.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed applications General Terms Security, Performance 1.\nINTRODUCTION Publish/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications.\nMuch of its capacity for scale in the number of participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them.\nIn truly Internet-scale publish/subscribe systems, the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic (and thus network) area.\nHowever, publish/subscribe systems that do span a wide geographic area are likely to also span multiple administrative domains, be they independent administrative domains inside a single organisation, multiple independent organisations, or a combination of the two.\nWhile the communication capabilities of publish/subscribe systems are well proved, spanning multiple administrative domains is likely to require addressing security considerations.\nAs security and access control are almost the antithesis of decoupling, relatively little publish/subscribe research has focused on security so far.\nOur overall research aim is to develop Internet-scale publish/subscribe networks that provide secure, efficient delivery of events, fault-tolerance and self-healing in the delivery infrastructure, and a convenient event interface.\nIn [12] Pesonen et al. 
propose a multi-domain, capabilitybased access control architecture for publish/subscribe systems.\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types.\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish/ subscribe system.\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly.\nAny malicious, compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers.\nThis might be acceptable in a relatively small system deployed inside a single organisation, but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure.\nWe propose enforcing access control within the broker network by encrypting event content, and that policy dictate controls over the necessary encryption keys.\nWith encrypted event content only those brokers that are authorised to ac104 cess the encryption keys are able to access the event content (i.e. publish, subscribe to, or filter).\nWe effectively move the enforcement of access control from the brokers to the encryption key managers.\nWe expect that access control would need to be enforced in a multi-domain publish/subscribe system when multiple organisations form a shared publish/subscribe system yet run multiple independent applications.\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish/subscribe system.\nBoth cases require access control because event delivery in a dynamic publish/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers.\nThere are two particular benefits to sharing the publish/ subscribe infrastructure, both of which relate to the broker network.\nFirst, sharing brokers will create a physically larger network that will provide greater geographic reach.\nSecond, increasing the inter-connectivity of brokers will allow the publish/subscribe system to provide higher faulttolerance.\nFigure 1 shows the multi-domain publish/subscribe network we use as an example throughout this paper.\nIt is based on the United Kingdom Police Forces, and we show three particular sub-domains: Metropolitan Police Domain.\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area.\nWe have included Detective Smith as a subscriber in this domain.\nCongestion Charge Service Domain.\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain.\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain.\nThe fact that the CCS are only authorised to read a subset of the vehicle event data will exercise some of the key features of the enforceable publish/subscribe system access control presented in this paper.\nPITO Domain.\nThe Police Information Technology Organisation (PITO) is the centre from which Police data standards are managed.\nIt is the event type owner in this particular scenario.\nEncryption protects the confidentiality of events should they be transported through unauthorised domains.\nHowever 
encrypting whole events means unauthorised brokers cannot make efficient routing decisions.\nOur approach is to apply encryption to the individual attributes of events.\nThis way our multi-domain access control policy works at a finer granularity - publishers and subscribers may be authorised access to a subset of the available attributes.\nIn cases where non-encrypted events are used for routing, we can reduce the total number of events sent through the system without revealing the values of sensitive attributes.\nIn our example scenario, the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings - the location attribute would not be decrypted.\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish/subscribe infrastructure.\nLet us assume that a Metropolitan Police Service detective is investigating a crime and she is interested in sightings of a specific vehicle.\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case.\nCurrent publish/subscribe access control systems enforce security at the edge of the broker network where clients connect to it.\nHowever this approach will often not be acceptable in Internet-scale systems.\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to, by encrypting event content.\nPublications will be encrypted with their event type specific encryption keys.\nBy controlling access to the encryption keys, we can control access to the event types.\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys.\nWe introduce decentralised publish/subscribe systems and relevant cryptography in Section 2.\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level.\nSection 4 discusses managing encryption keys in multi-domain publish/subscribe systems.\nWe analytically evaluate the performance of our proposal in Section 5.\nFinally Section 6 discusses related work in securing publish/subscribe systems and Section 7 provides concluding remarks.\n2.\nBACKGROUND In this section we provide a brief introduction to decentralised publish/subscribe systems.\nWe indicate our assumptions about multi-domain publish/subscribe systems, and describe how these assumptions influence the developments we have made from our previously published work.\n2.1 Decentralised Publish/Subscribe Systems A publish/subscribe system includes publishers, subscribers, and an event service.\nPublishers publish events, subscribers subscribe to events of interest to them, and the event service is responsible for delivering published events to all subscribers whose interests match the given event.\nThe event service in a decentralised publish/subscribe system is distributed over a number of broker nodes.\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers.\nClients (publishers and subscribers) connect to a local broker, which is fully trusted by the client.\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers (PHB) or subscriber hosting brokers (SHB) depending on whether the connected client is a publisher or a subscriber, respectively.\nFigure 1: An overall view of our multi-domain publish/subscribe deployment (key: Sub = Subscriber, SHB = Subscriber Hosting Broker, Pub = Publisher, PHB = Publisher Hosting Broker, TO = Type Owner, IB = Intermediate Broker)
A local broker is usually either part of the same domain as the client, or it is owned by a service provider trusted by the client.\nA broker network can have a static topology (e.g. Siena [3] and Gryphon [14]) or a dynamic topology (e.g. Scribe [4] and Hermes [13]).\nOur proposed approach will work in both cases.\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions (see Sect.\n3.4), which is very difficult with a dynamic topology.\nOn the other hand, a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network either in a controlled fashion or as a result of a network or node failure.\nOur work is based on the Hermes system.\nHermes is a content-based publish/subscribe middleware that includes strong event type support.\nIn other words, each publication is an instance of a particular predefined event type.\nPublications are type checked at the local broker of each publisher.\nOur attribute level encryption scheme assumes that events are typed.\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology.\nA Hermes publication consists of an event type identifier and a set of attribute value pairs.\nThe type identifier is the SHA-1 hash of the name of the event type.\nIt is used to route the publication through the event broker network.\nIt conveniently hides the type of the publication, i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier.\n2.2 Secure Event Types Pesonen et al. introduced secure event types in [11], which can have their integrity and authenticity confirmed by checking their digital signatures.\nA useful side effect of secure event types are their globally unique event type and attribute names.\nThese names can be referred to by access control policies.\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute.\n2.3 Capability-Based Access Control Pesonen et al.
proposed a capability-based access control architecture for multi-domain publish/subscribe systems in [12].\nThe model treats event types as resources that publishers, subscribers, and event brokers want to access.\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure (SPKI) authorisation certificates that grant the holder access to the specified event type.\nFor example, authorised publishers will have been issued an authorisation certificate that specifies that the publisher, identified by public key, is authorised to publish instances of the event type specified in the certificate.\nWe leverage the above mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates.\nThat is, a publisher who is authorised to publish a given event type, is also authorised 106 to access the encryption keys used to protect events of that type.\nWe discuss this in more detail in Sect.\n4.\n2.4 Threat model The goal of the proposed mechanism is to enforce access control for authorised participants in the system.\nIn our case the first level of access control is applied when the participant tries to join the publish/subscribe network.\nUnauthorised event brokers are not allowed to join the broker network.\nSimilarly unauthorised event clients are not allowed to connect to an event broker.\nAll the connections in the broker network between event brokers and event clients utilise Transport Layer Security (TLS) [5] in order to prevent unauthorised access on the transport layer.\nThe architecture of the publish/subscribe system means that event clients must connect to event brokers in order to be able to access the publish/subscribe system.\nThus we assume that these clients are not a threat.\nThe event client relies completely on the local event broker for access to the broker network.\nTherefore the event client is unable to access any events without the assistance of the local broker.\nThe brokers on the other hand are able to analyse all events in the system that pass through them.\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event (in the case of attribute level encryption).\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern.\nSimilarly attribute content can be padded to a standard length in order to avoid leaking information to the adversary.\nWhile traffic analysis is an important concern we have not addressed it further in this paper.\n3.\nENCRYPTING EVENT CONTENT We propose enforcing access control in a decentralised broker network by encrypting the contents of published events and controlling access to the encryption keys.\nEffectively we move the responsibility for access control from the broker network to the key managers.\nIt is assumed that all clients have access to a broker that they can trust and that is authorised to access the event content required by the client.\nThis allows us to implement the event content encryption within the broker network without involving the clients.\nBy delegating the encryption tasks to the brokers, we lower the number of nodes required to have access to a given encryption key1 .\nThe benefits are three-fold: i) fewer nodes handle the confidential encryption key so there is a smaller chance of the key being disclosed; ii) key refreshes involve fewer 
nodes which means that the key management algorithm will incur smaller communication and processing overheads to the publish/subscribe system; and iii) the local broker will decrypt an event once and deliver it to all subscribers, instead of each subscriber 1 The encryption keys are changed over time in response to brokers joining or leaving the network, and periodically to reduce the amount of time any single key is used.\nThis is discussed in Sect.\n4.2 having to decrypt the same event.\nDelegating encryption tasks to the local broker is appropriate, because encryption is a middleware feature used to enforce access control within the middleware system.\nIf applications need to handle encrypted data in the application layer, they are free to publish encrypted data over the publish/subscribe system.\nWe can implement encryption either at the event level or the attribute level.\nEvent encryption is simpler, requires fewer keys, fewer independent cryptographic operations, and thus is usually faster.\nAttribute encryption enables access control at the attribute level, which means that we have a more expressive and powerful access control mechanism, while usually incurring a larger performance penalty.\nIn this section we discuss encrypting event content both at the event level and the attribute level; avoiding leaking information to unauthorised brokers by encrypting subscription filters; avoiding unnecessary encryptions between authorised brokers; and finally, how event content encryption was implemented in our prototype.\nNote that since no publish/subscribe client is ever given access to encryption keys, any encryption performed by the brokers is necessarily completely transparent to all clients.\n3.1 Event Encryption In event encryption all the event attributes are encrypted as a single block of plaintext.\nThe event type identifier is left intact (i.e. 
in plaintext) in order to facilitate event routing in the broker network.\nThe globally unique event type identifier specifies the encryption key used to encrypt the event content.\nEach event type in the system will have its own individual encryption key.\nKeys are refreshed, as discussed in Sect.\n4.2.\nWhile in transit the event will consist of a tuple containing the type identifier, a publication timestamp, ciphertext, and a message authentication tag: .\nEvent brokers that are authorised to access the event, and thus have access to the encryption key, can decrypt the event and implement content-based routing.\nEvent brokers that do not have access to the encryption key will be forced to route the event based only on its type.\nThat is, they will not be able to make intelligent decisions about whether events need not be transmitted down their outgoing links.\nEvent encryption results in one encryption at the publisher hosting broker, and one decryption at each filtering intermediate broker and subscriber hosting broker that the event passes through, regardless of the number of attributes.\nThis results in a significant performance advantage compared to attribute encryption.\n3.2 Attribute Encryption In attribute encryption each attribute value in an event is encrypted separately with its own encryption key.\nThe encryption key is identified by the attribute``s globally unique identifier (the globally unique event identifier defines a namespace inside which the attribute identifier is a fully qualified name).\n107 The event type identifier is left intact to facilitate event routing for unauthorised brokers.\nThe attribute identifiers are also left intact to allow authorised brokers to decrypt the attribute values with the correct keys.\nBrokers that are authorised to access some of the attributes in an event, can implement content-based routing over the attributes that are accessible to them.\nAn attribute encrypted event in transit consists of the event type identifier, a publication timestamp, and a set of attribute tuples: .\nAttribute tuples consist of an attribute identifier, ciphertext, and a message authentication tag: .\nThe attribute identifier is the SHA-1 hash of the attribute name used in the event type definition.\nUsing the attribute identifier in the published event instead of the attribute name prevents unauthorised parties from learning which attributes are included in the publication.\nCompared with event encryption, attribute encryption usually results in larger processing overheads, because each attribute is encrypted separately.\nIn the encryption process the initialisation of the encryption algorithm takes a significant portion of the total running time of the algorithm.\nOnce the algorithm is initialised, increasing the amount of data to be encrypted does not affect the running time very much.\nThis disparity is emphasised in attribute encryption, where an encryption algorithm must be initialised for each attribute separately, and the amount of data encrypted is relatively small.\nAs a result attribute encryption incurs larger processing overheads when compared with event encryption which can be clearly seen from the performance results in Sect.\n5.\nThe advantage of attribute encryption is that the type owner is able to control access to the event type at the attribute level.\nThe event type owner can therefore allow clients to have different levels of access to the same event type.\nAlso, attribute level encryption enables content-based routing in cases where an 
intermediate broker has access only to some of the attributes of the event, thus reducing the overall impact of event delivery on the broker network.\nTherefore the choice between event and attribute encryption is a trade-off between expressiveness and performance, and depends on the requirements of the distributed application.\nThe expressiveness provided by attribute encryption can be emulated by introducing a new event type for each group of subscribers with the same authorisation.\nThe publisher would then publish an instance of each of these types instead of publishing just a combined event.\nFor example, in our London police network, the congestion control cameras would have to publish one event for the CCS and another for the detective.\nThis approach could become difficult to manage if the attributes have a variety of security properties, since a large number of event types would be required and policies and subscriptions may change dynamically.\nThis approach creates a large number of extra events that must be routed through the network, as is shown in Sect.\n5.3.\n3.3 Encrypting Subscriptions In order to fully protect the confidentiality of event content we must also encrypt subscriptions.\nEncrypted subscriptions guarantee: i) that only authorised brokers are able to submit subscriptions to the broker network, and ii) that unauthorised brokers do not gain information about event content by monitoring which subscriptions a given event matches.\nFor example, in the first case an unauthorised broker can create subscriptions with appropriately chosen filters, route them towards the root of the event dissemination tree, and monitor which events were delivered to it as matching the subscription.\nThe fact that the event matched the subscription would leak information to the broker about the event content even if the event was still encrypted.\nIn the second case, even if an unauthorised broker was unable to create subscriptions itself, it could still look at subscriptions that were routed through it, take note of the filters on those subscriptions, and monitor which events are delivered to it by upstream brokers as matching the subscription filters.\nThis would again reveal information about the event content to the unauthorised broker.\nIn the case of encrypting complete events, we also encrypt the complete subscription filter.\nThe event type identifier in the subscription must be left intact to allow brokers to route events based on their topic when they are not authorised to access the filter.\nIn such cases the unauthorised broker is required to assume that events of such a type match all filter expressions.\nEach attribute filter is encrypted individually, much as when encrypting a publication.\nIn addition to the event type identifier the attribute identifiers are also left intact to allow authorised brokers to decrypt those filters that they have access to, and route the event based on its matching the decrypted filters.\n3.4 Avoiding Unnecessary Cryptographic Operations Encrypting the event content is not necessary if the current broker and the next broker down the event dissemination tree have the same credentials with respect to the event type at hand.\nFor example, one can assume that all brokers inside an organisation would share the same credentials and therefore, as long as the next broker is a member of the same domain, the event can be routed to it in plaintext.\nWith attribute encryption it is possible that the neighbouring broker is authorised to access a subset of 
the decrypted attributes, in which case those attributes that the broker is not authorised to access would be passed to it encrypted.\nIn order to know when it is safe to pass the event in plaintext form, the brokers exchange credentials as part of a handshake when they connect to each other.\nIn cases when the brokers are able to verify each others' credentials, they will add them to the routing table for future reference.\nIf a broker acquires new credentials after the initial handshake, it will present these new credentials to its neighbours while in session.\nRegardless of its neighbouring brokers, the PHB will always encrypt the event content, because it is cheaper to encrypt the event once at the root of the event dissemination tree.\nIn Hermes the rendezvous node for each event type is selected uniformly randomly (the event type name is hashed with the SHA-1 hash algorithm to produce the event type identifier, then the identifier is used to select the rendezvous node in the structured overlay network).\nFigure 2: Node addressing is evenly distributed across the network, thus rendezvous nodes may lie outside the domain that owns an event type\nFigure 3: Caching decrypted data to increase efficiency when delivering to peers with equivalent security privileges\nTherefore it is probable that the rendezvous node will reside outside the current domain.\nThis situation is illustrated in the event dissemination tree in Fig. 2.\nSo even with domain internal applications, where the event can be routed from the publisher to all subscribers in plaintext form, the event content will in most cases have to be encrypted for it to be routed to the rendezvous node.\nTo avoid unnecessary decryptions, we attach a plaintext content cache to encrypted events.\nA broker fills the cache with content that it has decrypted, for example, in order to filter on the content.\nThe cache is accessed by the broker when it delivers an event to a local subscriber after first seeing if the event matches the subscription filter, but the broker also sends the cache to the next broker with the encrypted event.\nThe next broker can look the attribute up from the cache instead of having to decrypt it.\nIf the event is being sent to an unauthorised broker, the cache will be discarded before the event is sent.\nObviously sending the cache with the encrypted event will add to the communication cost, but this is outweighed by the saving in encryption/decryption processing.
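To ground the preceding discussion, the sketch below shows how a single attribute value could be protected with an AEAD cipher so that its identifier stays in the clear for routing while its value and integrity are protected. It is illustrative only: the standard Java JCE does not ship an EAX implementation, so AES/GCM (another AEAD mode) stands in for the EAX mode adopted in Sect. 3.5 below, and the key, nonce and identifier handling are simplified assumptions rather than the system's actual key management.

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative attribute-level AEAD encryption; AES/GCM is used here as a stand-in for EAX.
public final class AttributeCipherSketch {

    // Returns the ciphertext with the authentication tag appended; the attribute identifier
    // is bound to the ciphertext as associated data (integrity protected but not encrypted).
    public static byte[] encryptAttribute(byte[] key, byte[] nonce, String attributeId,
                                          byte[] plaintextValue) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, nonce)); // 16-byte tag; the nonce must never repeat per key
        cipher.updateAAD(attributeId.getBytes(StandardCharsets.UTF_8));
        return cipher.doFinal(plaintextValue);
    }

    public static byte[] decryptAttribute(byte[] key, byte[] nonce, String attributeId,
                                          byte[] ciphertextAndTag) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, nonce));
        cipher.updateAAD(attributeId.getBytes(StandardCharsets.UTF_8));
        return cipher.doFinal(ciphertextAndTag); // fails if the ciphertext or identifier was tampered with
    }
}

A broker authorised for the attribute would decrypt once, place the value in the plaintext cache described above, and strip that cache before forwarding the event to a broker that does not hold the same credentials.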
In Fig. 3 we see two separate cached plaintext streams accompanying an event, depending on the inter-broker relationships in two different domains.\nFigure 3: Caching decrypted data to increase efficiency when delivering to peers with equivalent security privileges\nWe show in Sect. 5.2 that sending encrypted messages with a full plaintext cache incurs almost no overhead compared to sending plaintext messages.\n3.5 Implementation In our implementation we have used the EAX mode [2] of operation when encrypting events, attributes, and subscription filters.\nEAX is a mode of operation for block ciphers, also called an Authenticated Encryption with Associated Data (AEAD) algorithm, that provides both data confidentiality and integrity protection.\nThe algorithm implements a two-pass scheme: during the first pass the plaintext is encrypted, and on the second pass a message authentication code (MAC) is generated for the encrypted data.\nThe EAX mode is compatible with any block cipher.\nWe decided to use the Advanced Encryption Standard (AES) [9] algorithm in our implementation, because it is a standard and has undergone thorough cryptanalysis without any serious vulnerabilities being found thus far.\nIn addition to providing both confidentiality and integrity protection, the EAX mode uses the underlying block cipher in counter mode (CTR mode) [21].\nA block cipher in counter mode produces a key stream that is then XORed with the plaintext; effectively, CTR mode transforms a block cipher into a stream cipher.\nThe advantage of stream ciphers is that the ciphertext is the same length as the plaintext, whereas with block ciphers the plaintext must be padded to a multiple of the block cipher's block length (e.g. the AES block size is 128 bits).\nAvoiding padding is very important in attribute encryption, because the padding might increase the size of the attribute disproportionately.\nFor example, a single integer might be 32 bits in length, which would be padded to 128 bits if we used a block cipher directly.\nWith event encryption the message expansion is less significant, since the padding required to reach the next 16-byte multiple will usually be a small proportion of the overall plaintext length.\nIn encryption mode the EAX algorithm takes as input a nonce (a number used once), an encryption key and the plaintext, and it returns the ciphertext and an authentication tag.\nIn decryption mode the algorithm takes as input the encryption key, the ciphertext and the authentication tag, and it returns either the plaintext, or an error if the authentication check failed.\nThe nonce is expanded to the block length of the underlying block cipher by passing it through an OMAC construct (see [7]).\nIt is important that particular nonce values are not reused, otherwise the block cipher in CTR mode would produce an identical key stream.\nIn our implementation we used the PHB-defined event timestamp (a 64-bit value counting the milliseconds since January 1, 1970 UTC) appended with the PHB's identity (i.e. its public key) as the nonce.
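As a concrete illustration of the per-attribute encryption step, the sketch below uses the Bouncy Castle lightweight API, which provides an EAX implementation over AES. The paper does not name the library it used, so the class choices, the 128-bit tag, and the timestamp-plus-identity nonce layout should be read as assumptions rather than the authors' exact code.

```java
// Illustrative sketch (assumes the Bouncy Castle library, not the paper's own code):
// encrypt one attribute value under AES/EAX, with a timestamp||publisher-identity nonce
// and the attribute identifier bound as (unencrypted) associated data.
import org.bouncycastle.crypto.InvalidCipherTextException;
import org.bouncycastle.crypto.engines.AESEngine;
import org.bouncycastle.crypto.modes.EAXBlockCipher;
import org.bouncycastle.crypto.params.AEADParameters;
import org.bouncycastle.crypto.params.KeyParameter;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

final class AttributeCrypto {

    /** Nonce = 64-bit publication timestamp followed by the PHB's identity bytes. */
    static byte[] nonce(long timestampMillis, byte[] phbIdentity) {
        return ByteBuffer.allocate(8 + phbIdentity.length)
                .putLong(timestampMillis)
                .put(phbIdentity)
                .array();
    }

    /** Returns the ciphertext with the 16-byte authentication tag appended. */
    static byte[] encryptAttribute(byte[] key, byte[] nonce,
                                   String attributeId, byte[] plaintext)
            throws InvalidCipherTextException {
        EAXBlockCipher eax = new EAXBlockCipher(new AESEngine());
        byte[] associated = attributeId.getBytes(StandardCharsets.UTF_8);
        // 128-bit tag; the attribute identifier is authenticated but left readable for routing.
        eax.init(true, new AEADParameters(new KeyParameter(key), 128, nonce, associated));
        byte[] out = new byte[eax.getOutputSize(plaintext.length)];
        int n = eax.processBytes(plaintext, 0, plaintext.length, out, 0);
        eax.doFinal(out, n);
        return out;
    }
}
```

Decryption mirrors this call with init(false, ...); doFinal then fails if the authentication tag does not verify, which corresponds to the error case described above. Encrypted subscription filters can be handled with the same routine, keyed by the attribute identifier.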
The broker is responsible for ensuring that the timestamps increase monotonically.\nThe authentication tag is appended to the produced ciphertext, forming a (ciphertext, tag) pair.\nWith event encryption a single tag is created for the encrypted event.\nWith attribute encryption each attribute is encrypted and authenticated separately, so each attribute carries its own tag.\nThe tag length is configurable in EAX without restrictions, which allows the user to trade off the authenticity guarantees provided by EAX against the added communication overhead.\nWe used a tag length of 16 bytes in our implementation, but one could make the tag length a publisher/subscriber-defined parameter for each publication/subscription, or include it in the event type definition to make it a type-specific parameter.\nEAX also supports including unencrypted associated data in the tag calculation.\nThe integrity of this data is protected, but it remains readable by everyone.\nThis feature could be used with event encryption in cases where some of the event content is public and thus useful for content-based routing: the integrity of that data would still be protected against changes, but unauthorised brokers would be able to apply filters to it.\nWe have included the event type identifier as associated data in order to protect its integrity.\nOther AEAD algorithms include the offset codebook mode (OCB) [17] and the counter with CBC-MAC mode (CCM) [22].\nIn contrast to the EAX mode, the OCB mode requires only one pass over the plaintext, which makes it roughly twice as fast as EAX.\nUnfortunately the OCB mode is the subject of a patent application in the USA, which restricts its use.\nThe CCM mode is the predecessor of the EAX mode: it was developed in order to provide a free alternative to OCB, and EAX was developed later to address some issues with CCM [18].\nLike EAX, CCM is a two-pass mode.\n4.\nKEY MANAGEMENT In both encryption approaches the encrypted event content has a globally unique identifier (i.e. the event type or the attribute identifier).\nThat identifier is used to determine the encryption key to use when encrypting or decrypting the content.\nEach event type, in event encryption, and each attribute, in attribute encryption, has its own individual encryption key.\nBy controlling access to the encryption key we effectively control access to the encrypted event content.\nIn order to control access to the encryption keys we form a key group of brokers for each individual encryption key.\nThe key group is used to refresh the key when necessary and to deliver the new key to all current members of the key group.\nThe key group manager is responsible for verifying that a new member requesting to join the key group is authorised to do so.\nTherefore the key group manager must be trusted by the type owner to enforce the access control policy.\nWe assume that the key group manager is either a trusted third party or a member of the type owner's domain.
In [12] Pesonen et al. proposed a capability-based access control architecture for multi-domain publish/subscribe systems.\nThe approach uses capabilities to decentralise the access control policy amongst the publish/subscribe nodes (i.e. clients and brokers): each node holds a set of capabilities that define the authority granted to that node.\nAuthority to access a given event type is granted by the owner of that type issuing a capability to a node.\nThe capability defines the event type, the action, and the attributes that the node is authorised to access.\nFor example, a capability tuple naming the Numberplate event type, the subscribe action, and all attributes would authorise its holder to subscribe to Numberplate events with access to all attributes in the published events.\nThe sequence of events required for a broker to successfully join a key group is shown in Fig. 4.\nFigure 4: The steps involved for a broker to successfully join a key group (1. the type owner grants authorisation for the Numberplate key; 2. the broker requests to join the Numberplate key group; 3. the key manager may check the broker's credentials at the Access Control Service; 4. the key manager may check that the type owner permits access; 5. if the broker satisfies all checks, it begins receiving the appropriate keys)\nBoth the client hosting broker and the client must be authorised to make the client's request.\nThat is, if the client makes a subscription request for Numberplate events, both the client and the local broker must be authorised to subscribe to Numberplate events.\nThis is because, from the perspective of the broker network, the local broker acts as a proxy for the client.\nWe use the same capabilities to authorise membership in a key group that are used to authorise publish/subscribe requests.\nNot doing so could lead to the inconsistent situation where a SHB is authorised to make a subscription on behalf of its clients, but is not able to decrypt incoming event content for them.\nIn the Numberplate example above, the local broker holding the above capability is authorised to join the Numberplate key group as well as the key groups for all the attributes in the Numberplate event type.
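A key group manager's admission check can be summarised as follows. The sketch is a hypothetical illustration: the Capability record, the "*" wildcard, and the assumption that credential verification has already happened stand in for the SPKI machinery referenced above.

```java
// Illustrative sketch (hypothetical types): admit a broker to a key group only if
// one of its verified capabilities covers the key's event type and attribute.
import java.util.Set;

record Capability(String eventType, String action, Set<String> attributes) {}

final class KeyGroupManager {
    private final String eventType;   // the key group's event type
    private final String attribute;   // null for the whole-event key group

    KeyGroupManager(String eventType, String attribute) {
        this.eventType = eventType;
        this.attribute = attribute;
    }

    /** A broker may join if any verified capability covers this key. */
    boolean mayJoin(Set<Capability> verifiedCapabilities) {
        return verifiedCapabilities.stream().anyMatch(c ->
                c.eventType().equals(eventType)
                && (c.action().equals("subscribe") || c.action().equals("publish"))
                && (attribute == null
                    || c.attributes().contains(attribute)
                    || c.attributes().contains("*"))); // "*" = all attributes (assumed convention)
    }
}
```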
4.1 Secure Group Communication Event content encryption in a decentralised multi-domain publish/subscribe system can be seen as a sub-category of secure group communication.\nIn both cases the key management system must scale well with the number of clients, clients might be spread over large geographic areas, there might be high rates of churn in group membership, and all members must be synchronised with each other in time in order to use the same encryption key at the same time.\nThere are a number of scalable key management protocols for secure group communication [15].\nWe have implemented the One-Way Function Tree (OFT) [8] protocol as a proof of concept.\nWe chose to implement OFT because of its relative simplicity and good performance.\nOur implementation uses the same structured overlay network used by the broker network as a transport.\nThe OFT protocol is based on a binary tree where the participants are at the leaves of the tree.\nIts processing and communication costs, as well as the size of the state stored at each participant, scale as log2 n in the number of participants, which we have verified in our simulations.\n4.2 Key Refreshing Traditionally in group key management schemes the encryption key is refreshed when a new member joins the group, an existing member leaves the group, or a timer expires.\nRefreshing the key when a new member joins provides backward secrecy, i.e. the new member is prevented from accessing old messages.\nSimilarly, refreshing the key when an existing member leaves provides forward secrecy, i.e. the old member is prevented from accessing future messages.\nTimer-triggered refreshes are issued periodically in order to limit the damage caused by the current key being compromised.\nEven though state-of-the-art key management protocols are efficient, refreshing the key unnecessarily introduces extra traffic and processing amongst the key group members.\nIn our case key group membership is based on the broker holding a capability that authorises it to join the key group.\nThe capability has a set of validity conditions that in their simplest form define a time period during which the certificate is valid, and in more complex cases involve on-line checks back towards the issuer.\nIn order to avoid unnecessary key refreshes the key manager looks at the certificate validity conditions of the joining or leaving member.\nIn the case of a joining member, if the manager can ascertain that the certificate was already valid at the time of the previous key refresh, a new key refresh can be avoided.\nSimilarly, instead of refreshing the key immediately when a member leaves the key group, the key manager can cache that member's credentials and refresh the key only when the credentials expire.\nThese situations are both illustrated in Fig. 5.
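The refresh-avoidance rules can be stated compactly. The following sketch is a hypothetical illustration of the decision logic only, not the paper's implementation, and it assumes credential validity is expressed as a simple time interval.

```java
// Illustrative sketch (hypothetical types): decide whether a join or leave
// actually requires a key refresh, based on credential validity periods.
import java.time.Instant;

final class RefreshPolicy {
    private Instant lastRefresh;           // time of the previous key refresh
    private Instant earliestCachedExpiry;  // earliest expiry among departed members' credentials

    RefreshPolicy(Instant lastRefresh) { this.lastRefresh = lastRefresh; }

    /** A joining member forces a refresh only if its credential was not yet valid at the last refresh. */
    boolean refreshOnJoin(Instant credentialValidFrom, Instant now) {
        boolean refreshNeeded = credentialValidFrom.isAfter(lastRefresh);
        if (refreshNeeded) lastRefresh = now;
        return refreshNeeded;
    }

    /** A leaving member's credentials are cached; the refresh is deferred until they expire. */
    void onLeave(Instant credentialValidUntil) {
        if (earliestCachedExpiry == null || credentialValidUntil.isBefore(earliestCachedExpiry)) {
            earliestCachedExpiry = credentialValidUntil;
        }
    }

    /** Called periodically: refresh once some departed member's credentials have expired. */
    boolean refreshDue(Instant now) {
        if (earliestCachedExpiry != null && !now.isBefore(earliestCachedExpiry)) {
            earliestCachedExpiry = null;
            lastRefresh = now;
            return true;
        }
        return false;
    }
}
```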
It can be assumed that the credentials granted to brokers are relatively static, i.e. once a domain is authorised to access an event type, the authority will be delegated to all brokers of that domain, and they will hold that authority for the foreseeable future.\nMore fine-grained and dynamic access control would be implemented at the edge of the broker network, between the clients and the client hosting brokers.\nWhen an encryption key is refreshed the new key is tagged with a timestamp.\nThe encryption key to use for a given event is selected based on the event's publication timestamp.\nOld keys are kept for a reasonable amount of time in order to allow for some clock drift.\nSetting this value is part of the key management protocol; exactly how long it should be will depend on the nature of the application and possibly the size of the network, and it can be configured independently per key group if necessary.\n5.\nEVALUATION In order to evaluate the performance of event content encryption we have implemented both encryption approaches running over our implementation of the Hermes publish/subscribe middleware.\nThe implementation supports three modes in a single publish/subscribe system: plaintext content, event encryption, and attribute encryption.\nWe ran three performance tests in a discrete event simulator.\nThe simulator was run on an Intel P4 3.2GHz workstation with 1GB of main memory.\nWe decided to run the tests in an event simulator instead of an actual deployed system in order to be able to measure the aggregate time it takes to handle all messages in the system.\nThe following sections describe the specific test setups and the results in more detail.\n5.1 End-to-End Overhead The end-to-end overhead test shows how much the overall message throughput of the simulator was affected by event content encryption.\nWe formed a broker network with two brokers, attached a publisher to one of them and a subscriber to the other.\nThe subscriber subscribed to the advertised event type without any filters, i.e. each publication matched the subscriber's subscription and was thus delivered to the subscriber.\nThe test measures the combined time it takes to publish and deliver 100,000 events.\nIf the content is encrypted this includes both encrypting the content at the PHB and decrypting it at the SHB.\nIn the test the number of attributes in the event type is increased from 1 to 25 (the x-axis).\nEach attribute is set to a 30-character string.\nFor each number of attributes in the event type the publisher publishes 100,000 events, and the elapsed time is measured to derive the message throughput.\nThe test was repeated five times for each number of attributes and we use the average of all iterations in the graph; the results were highly consistent, so the standard deviation is not shown.\nThe same tests were run with no content encryption, event encryption, and attribute encryption.\nAs can be seen in Fig. 6, event content encryption introduces a large overhead compared to not using encryption.\nThe throughput when using attribute encryption with an event type with one attribute is 46% of the throughput achieved when events are sent in plaintext.\nWhen the number of attributes increases the performance gap increases as well: with ten attributes the performance with attribute encryption has decreased to 11.7% of plaintext performance.\nEvent encryption fares better because it requires fewer encryption operations: the increase in the amount of encrypted data does not affect performance as much as the number of individual encryption operations does.\nThe difference in performance between event encryption and attribute encryption with only one attribute is caused by the Java object serialisation mechanism: in the event encryption case the whole attribute structure is serialised, which results in more objects than serialising a single attribute value.\nA more efficient implementation would provide its own marshalling mechanism.
Note that the EAX implementation we use runs the nonce (i.e. the initialisation vector) through an OMAC construct to increase its randomness.\nSince the nonce is not required to be kept secret (just unique), there is a potential time/space trade-off, which we have not yet investigated, in attaching extra nonce attributes that have already had this OMAC construct applied to them.\n5.2 Domain Internal Events We explained in Sect. 3.4 that event content decryption and encryption can be avoided if both brokers are authorised to access the event content.\nThis test was designed to show that the use of the encrypted event content mechanism between two authorised brokers incurs only a small performance overhead.\nIn this test we again form a broker network with two brokers.\nFigure 5: How the key refresh schedule is affected by brokers joining and leaving key groups (timelines of Broker 1 and Broker 2 joining and leaving, their one-day credential validity, and the actual key refresh times)\nFigure 6: Throughput of events in the simulator (messages per second against number of attributes, for no encryption, attribute encryption, and whole-content encryption)\nBoth brokers are configured with the same credentials.\nThe publisher is attached to one of the brokers and the subscriber to the other, and again the subscriber does not specify any filters in its subscription.\nThe publisher publishes 100,000 events and the test measures the elapsed time in order to derive the system's message throughput.\nThe event content is encrypted outside the timing measurement, i.e. the encryption cost is not included in the measurements.\nThe goal is to model an environment where a broker has received a message from another authorised broker and routes the event on to a third authorised broker.\nIn this scenario the middle broker needs neither to decrypt nor to encrypt the event content.
As shown in Fig. 7, the elapsed time was measured as the number of attributes in the published event was increased from 1 to 25.\nThe attribute values in each case are 30-character strings.\nEach test is repeated five times, and we use the average of all iterations in the graph.\nThe same test was then repeated with no encryption, event encryption, and attribute encryption turned on.\nFigure 7: Throughput of domain-internal events (messages per second against number of attributes, for no encryption, attribute encryption, and whole-content encryption)\nThe encrypted modes follow each other very closely.\nPredictably, the plaintext mode performs a little better for all attribute counts.\nThe difference can be explained partially by the encrypted events being larger, because they include both the plaintext and the encrypted content in this test.\nThe difference in performance is 3.7% with one attribute and 2.5% with 25 attributes.\nWe believe that the roughness of the graphs can be explained by the Java garbage collector interfering with the simulation; the fact that all three graphs show the same irregularities supports this theory.\n5.3 Communication Overhead Through the definition of multiple event types, it is possible to emulate the expressiveness of attribute encryption using only event content encryption.\nThe last test we ran shows the communication overhead caused by this emulation technique, compared to using real attribute encryption.\nIn the test we form a broker network of 2000 brokers.\nWe attach one publisher to one of the brokers, and an increasing number of subscribers to the remaining brokers.\nEach subscriber simulates a group of subscribers that all have the same access rights to the published event.\nEach subscriber group has its own event type in the test.\nThe outcome of this test is shown in Fig. 8.\nFigure 8: Hop counts when emulating attribute encryption (total number of hops, on a logarithmic scale, against number of subscription groups, for attribute encryption and whole-content encryption)\nThe number of subscriber groups is increased from 1 to 50 (the x-axis).\nFor each n subscriber groups the publisher publishes one event to represent the use of attribute encryption and n events representing the per-group events.\nWe count the number of hops each publication makes through the broker network (the y-axis).\nNote that Fig. 8 shows workloads beyond what we would expect in common usage, in which many event types are likely to contain fewer than ten attributes.\nThe subscriber groups used in this test represent disjoint permission sets over such event attributes.\nThe number of these sets can be determined from the particular access control policy in use, but will be a value less than or equal to the factorial of the number of attributes in a given event type.\nThe graphs indicate that attribute encryption performs better than event encryption even for small numbers of subscriber groups.\nIndeed, with only two subscriber groups (e.g. the case with Numberplate events) the hop count increases from 7.2 hops for attribute encryption to 16.6 hops for event encryption.\nWith 10 subscriber groups the corresponding numbers are 24.2 and 251.0, i.e. an order of magnitude difference.
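To make the comparison concrete, the fragment below sketches the two publisher behaviours being measured; the publish-style API is hypothetical and stands in for whatever client interface the middleware exposes.

```java
// Illustrative sketch (hypothetical publisher API): attribute encryption sends one
// event, whereas emulating it with per-group event types sends one event per group.
import java.util.List;
import java.util.Map;

interface Publisher { void publish(String eventType, Map<String, Object> attributes); }

final class NumberplateSource {
    /** One publication; brokers decrypt only the attributes they hold keys for. */
    static void withAttributeEncryption(Publisher p, Map<String, Object> sighting) {
        p.publish("Numberplate", sighting);
    }

    /** n publications, one per subscriber group's event type (e.g. CCS vs. the detective). */
    static void withPerGroupTypes(Publisher p, Map<String, Object> sighting,
                                  List<String> groupEventTypes) {
        for (String type : groupEventTypes) {
            // In practice each copy would carry only that group's permitted attributes;
            // each copy is also routed separately, which inflates the hop counts in Fig. 8.
            p.publish(type, sighting);
        }
    }
}
```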
6.\nRELATED WORK Wang et al. have categorised the various security issues that need to be addressed in publish/subscribe systems in the future in [20].\nThe paper is a comprehensive overview of security issues in publish/subscribe systems and as such aims to draw attention to the issues rather than to provide solutions.\nBacon et al. in [1] examine the use of role-based access control in multi-domain, distributed publish/subscribe systems.\nTheir work is complementary to this paper: distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented.\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [10].\nThey correctly state that a secure group communication approach is infeasible in an environment like publish/subscribe that has highly dynamic group memberships.\nAs a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from a SHB to a set of matching subscribers.\nWe assume in our work that the SHB is powerful enough to manage a TLS-secured connection for each local subscriber.\nBoth Srivatsa et al. [19] and Raiciu et al. [16] present mechanisms for protecting the confidentiality of messages in decentralised publish/subscribe infrastructures.\nCompared to our work, both papers aim to provide the means for protecting the integrity and confidentiality of messages, whereas the goal of our work is to enforce access control inside the broker network.\nRaiciu et al. assume that none of the brokers in the network are trusted, so all events are encrypted from publisher to subscriber and all matching is performed on encrypted events.\nIn contrast, we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching.\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication.\nThe contributions of Srivatsa et al. and Raiciu et al. are complementary to the contributions in this paper.
Finally, Fiege et al. address the related topic of event visibility in [6].\nWhile that work concentrated on using scopes as a mechanism for structuring large-scale event-based systems, the notion of event visibility does resonate with access control to some extent.\n7.\nCONCLUSIONS Event content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish/subscribe system.\nEncryption causes an overhead, but i) there may be no alternative when access control is required, and ii) the performance penalty can be lessened with implementation optimisations, such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials.\nThis is particularly appropriate if broker-to-broker connections are secured by default so that wire-sniffing is not an issue.\nAttribute-level encryption can be implemented in order to enforce fine-grained access control policies.\nIn addition to providing attribute-level access control, attribute encryption enables partially authorised brokers to implement content-based routing based on the attributes that are accessible to them.\nOur experiments show i) that by caching plaintext and ciphertext content when possible, we are able to deliver performance comparable to plaintext events, and ii) that attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection.\nIn environments comprising multiple domains, where event brokers have different security credentials, we have quantified how a trade-off can be made between performance and expressiveness.\nAcknowledgements We would like to thank the anonymous reviewers for their very helpful comments.\nLauri Pesonen is supported by EPSRC (GR/T28164) and the Nokia Foundation.\nDavid Eyers is supported by EPSRC (GR/S94919).\n8.\nREFERENCES [1] J. Bacon, D. M. Eyers, K. Moody, and L. I. W. Pesonen.\nSecuring publish/subscribe for multi-domain systems.\nIn G. Alonso, editor, Middleware, volume 3790 of Lecture Notes in Computer Science, pages 1-20.\nSpringer, 2005.\n[2] M. Bellare, P. Rogaway, and D. Wagner.\nEAX: A conventional authenticated-encryption mode.\nCryptology ePrint Archive, Report 2003/069, 2003.\nhttp://eprint.iacr.org/.\n[3] A. Carzaniga, D. S. Rosenblum, and A. L. Wolf.\nDesign and evaluation of a wide-area event notification service.\nACM Transactions on Computer Systems, 19(3):332-383, Aug. 2001.\n[4] M. Castro, P. Druschel, A. Kermarrec, and A. Rowstron.\nSCRIBE: A large-scale and decentralized application-level multicast infrastructure.\nIEEE Journal on Selected Areas in Communications (JSAC), 20(8):1489-1499, Oct. 2002.\n[5] T. Dierks and C. Allen.\nThe TLS protocol, version 1.0.\nRFC 2246, Internet Engineering Task Force, Jan. 1999.\n[6] L. Fiege, M. Mezini, G. Mühl, and A. P. Buchmann.\nEngineering event-based systems with scopes.\nIn ECOOP '02: Proceedings of the 16th European Conference on Object-Oriented Programming, pages 309-333, London, UK, 2002.\nSpringer-Verlag.\n[7] T. Iwata and K. Kurosawa.\nOMAC: One-key CBC MAC, Jan. 14 2002.\n[8] D. A. McGrew and A. T. Sherman.\nKey establishment in large dynamic groups using one-way function trees.\nTechnical Report 0755, TIS Labs at Network Associates, Inc., Glenwood, MD, May 1998.\n[9] National Institute of Standards and Technology (NIST).\nAdvanced Encryption Standard (AES).\nFederal Information Processing Standards Publication (FIPS PUB) 197, Nov. 2001.\n
[10] L. Opyrchal and A. Prakash.\nSecure distribution of events in content-based publish subscribe systems.\nIn Proc. of the 10th USENIX Security Symposium.\nUSENIX, Aug. 2001.\n[11] L. I. W. Pesonen and J. Bacon.\nSecure event types in content-based, multi-domain publish/subscribe systems.\nIn SEM '05: Proceedings of the 5th International Workshop on Software Engineering and Middleware, pages 98-105, New York, NY, USA, Sept. 2005.\nACM Press.\n[12] L. I. W. Pesonen, D. M. Eyers, and J. Bacon.\nA capabilities-based access control architecture for multi-domain publish/subscribe systems.\nIn Proceedings of the Symposium on Applications and the Internet (SAINT 2006), pages 222-228, Phoenix, AZ, Jan. 2006.\nIEEE.\n[13] P. R. Pietzuch and J. M. Bacon.\nHermes: A distributed event-based middleware architecture.\nIn Proc. of the 1st International Workshop on Distributed Event-Based Systems (DEBS '02), pages 611-618, Vienna, Austria, July 2002.\nIEEE.\n[14] P. R. Pietzuch and S. Bhola.\nCongestion control in a reliable scalable message-oriented middleware.\nIn M. Endler and D. Schmidt, editors, Proc. of the 4th Int. Conf. on Middleware (Middleware '03), pages 202-221, Rio de Janeiro, Brazil, June 2003.\nSpringer.\n[15] S. Rafaeli and D. Hutchison.\nA survey of key management for secure group communication.\nACM Computing Surveys, 35(3):309-329, 2003.\n[16] C. Raiciu and D. S. Rosenblum.\nEnabling confidentiality in content-based publish/subscribe infrastructures.\nIn Securecomm '06: Proceedings of the Second IEEE/CreateNet International Conference on Security and Privacy in Communication Networks, 2006.\n[17] P. Rogaway, M. Bellare, J. Black, and T. Krovetz.\nOCB: A block-cipher mode of operation for efficient authenticated encryption.\nIn ACM Conference on Computer and Communications Security, pages 196-205, 2001.\n[18] P. Rogaway and D. Wagner.\nA critique of CCM, Feb. 2003.\n[19] M. Srivatsa and L. Liu.\nSecuring publish-subscribe overlay services with EventGuard.\nIn CCS '05: Proceedings of the 12th ACM Conference on Computer and Communications Security, pages 289-298, New York, NY, USA, 2005.\nACM Press.\n[20] C. Wang, A. Carzaniga, D. Evans, and A. L. Wolf.\nSecurity issues and requirements in Internet-scale publish-subscribe systems.\nIn Proc. of the 35th Annual Hawaii International Conference on System Sciences (HICSS '02), Big Island, HI, USA, 2002.\nIEEE.\n[21] W. Diffie and M. Hellman.\nPrivacy and authentication: An introduction to cryptography.\nProceedings of the IEEE, 67:397-427, 1979.\n[22] D. Whiting, R. Housley, and N. Ferguson.\nCounter with CBC-MAC (CCM).\nRFC 3610, Internet Engineering Task Force, Sept. 
2003.\n115", "lvl-3": "Encryption-Enforced Access Control in Dynamic Multi-Domain Publish/Subscribe Networks\nABSTRACT\nPublish/subscribe systems provide an efficient , event-based , wide-area distributed communications infrastructure .\nLarge scale publish/subscribe systems are likely to employ components of the event transport network owned by cooperating , but independent organisations .\nAs the number of participants in the network increases , security becomes an increasing concern .\nThis paper extends previous work to present and evaluate a secure multi-domain publish/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types .\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network .\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques , and by the use of caching to decrease unnecessary decryptions .\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish/subscribe networks .\n1 .\nINTRODUCTION\nPublish/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications .\nMuch of its capacity for scale in the number\nof participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them .\nIn truly Internet-scale publish/subscribe systems , the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic ( and thus network ) area .\nHowever , publish/subscribe systems that do span a wide geographic area are likely to also span multiple administrative domains , be they independent administrative domains inside a single organisation , multiple independent organisations , or a combination of the two .\nWhile the communication capabilities of publish/subscribe systems are well proved , spanning multiple administrative domains is likely to require addressing security considerations .\nAs security and access control are almost the antithesis of decoupling , relatively little publish/subscribe research has focused on security so far .\nOur overall research aim is to develop Internet-scale publish/subscribe networks that provide secure , efficient delivery of events , fault-tolerance and self-healing in the delivery infrastructure , and a convenient event interface .\nIn [ 12 ] Pesonen et al. 
propose a multi-domain , capabilitybased access control architecture for publish/subscribe systems .\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types .\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish / subscribe system .\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly .\nAny malicious , compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers .\nThis might be acceptable in a relatively small system deployed inside a single organisation , but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure .\nWe propose enforcing access control within the broker network by encrypting event content , and that policy dictate controls over the necessary encryption keys .\nWith encrypted event content only those brokers that are authorised to ac\ncess the encryption keys are able to access the event content ( i.e. publish , subscribe to , or filter ) .\nWe effectively move the enforcement of access control from the brokers to the encryption key managers .\nWe expect that access control would need to be enforced in a multi-domain publish/subscribe system when multiple organisations form a shared publish/subscribe system yet run multiple independent applications .\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish/subscribe system .\nBoth cases require access control because event delivery in a dynamic publish/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers .\nThere are two particular benefits to sharing the publish / subscribe infrastructure , both of which relate to the broker network .\nFirst , sharing brokers will create a physically larger network that will provide greater geographic reach .\nSecond , increasing the inter-connectivity of brokers will allow the publish/subscribe system to provide higher faulttolerance .\nFigure 1 shows the multi-domain publish/subscribe network we use as an example throughout this paper .\nIt is based on the United Kingdom Police Forces , and we show three particular sub-domains : Metropolitan Police Domain .\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area .\nWe have included Detective Smith as a subscriber in this domain .\nCongestion Charge Service Domain .\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain .\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain .\nThe fact that the CCS are only authorised to read a subset of the vehicle event data will exercise some of the key features of the enforceable publish/subscribe system access control presented in this paper .\nPITO Domain .\nThe Police Information Technology Organisation ( PITO ) is the centre from which Police data standards are managed .\nIt is the event type owner in this particular scenario .\nEncryption protects the confidentiality of events should they be transported 
through unauthorised domains .\nHowever encrypting whole events means unauthorised brokers can not make efficient routing decisions .\nOur approach is to apply encryption to the individual attributes of events .\nThis way our multi-domain access control policy works at a finer granularity -- publishers and subscribers may be authorised access to a subset of the available attributes .\nIn cases where non-encrypted events are used for routing , we can reduce the total number of events sent through the system without revealing the values of sensitive attributes .\nIn our example scenario , the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings -- the location attribute would not be decrypted .\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish/subscribe infrastructure .\nLet us assume that a Metropolitan Police Service detective is investigating a crime and she is interested in sightings of a specific vehicle .\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case .\nCurrent publish/subscribe access control systems enforce security at the edge of the broker network where clients connect to it .\nHowever this approach will often not be acceptable in Internet-scale systems .\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to , by encrypting event content .\nPublications will be encrypted with their event type specific encryption keys .\nBy controlling access to the encryption keys , we can control access to the event types .\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys .\nWe introduce decentralised publish/subscribe systems and relevant cryptography in Section 2 .\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level .\nSection 4 discusses managing encryption keys in multi-domain publish/subscribe systems .\nWe analytically evaluate the performance of our proposal in Section 5 .\nFinally Section 6 discusses related work in securing publish/subscribe systems and Section 7 provides concluding remarks .\n2 .\nBACKGROUND\nIn this section we provide a brief introduction to decentralised publish/subscribe systems .\nWe indicate our assumptions about multi-domain publish/subscribe systems , and describe how these assumptions influence the developments we have made from our previously published work .\n2.1 Decentralised Publish/Subscribe Systems\nA publish/subscribe system includes publishers , subscribers , and an event service .\nPublishers publish events , subscribers subscribe to events of interest to them , and the event service is responsible for delivering published events to all subscribers whose interests match the given event .\nThe event service in a decentralised publish/subscribe system is distributed over a number of broker nodes .\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers .\nClients ( publishers and subscribers ) connect to a local broker , which is fully trusted by the client .\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers ( PHB ) or subscriber hosting brokers ( SHB ) depending on whether the connected client is a publisher or\nFigure 1 : An overall view of our 
multi-domain publish/subscribe deployment\na subscriber , respectively .\nA local broker is usually either part of the same domain as the client , or it is owned by a service provider trusted by the client .\nA broker network can have a static topology ( e.g. Siena [ 3 ] and Gryphon [ 14 ] ) or a dynamic topology ( e.g. Scribe [ 4 ] and Hermes [ 13 ] ) .\nOur proposed approach will work in both cases .\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions ( see Sect .\n3.4 ) , which is very difficult with a dynamic topology .\nOn the other hand , a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network either in a controlled fashion or as a result of a network or node failure .\nOur work is based on the Hermes system .\nHermes is a content-based publish/subscribe middleware that includes strong event type support .\nIn other words , each publication is an instance of a particular predefined event type .\nPublications are type checked at the local broker of each publisher .\nOur attribute level encryption scheme assumes that events are typed .\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology .\nA Hermes publication consists of an event type identifier and a set of attribute value pairs .\nThe type identifier is the SHA-1 hash of the name of the event type .\nIt is used to route the publication through the event broker network .\nIt conveniently hides the type of the publication , i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier .\n2.2 Secure Event Types\nPesonen et al. introduced secure event types in [ 11 ] , which can have their integrity and authenticity confirmed by checking their digital signatures .\nA useful side effect of secure event types are their globally unique event type and attribute names .\nThese names can be referred to by access control policies .\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute .\n2.3 Capability-Based Access Control\nPesonen et al. 
proposed a capability-based access control architecture for multi-domain publish/subscribe systems in [ 12 ] .\nThe model treats event types as resources that publishers , subscribers , and event brokers want to access .\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure ( SPKI ) authorisation certificates that grant the holder access to the specified event type .\nFor example , authorised publishers will have been issued an authorisation certificate that specifies that the publisher , identified by public key , is authorised to publish instances of the event type specified in the certificate .\nWe leverage the above mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates .\nThat is , a publisher who is authorised to publish a given event type , is also authorised\nto access the encryption keys used to protect events of that type .\nWe discuss this in more detail in Sect .\n4 .\n2.4 Threat model\nThe goal of the proposed mechanism is to enforce access control for authorised participants in the system .\nIn our case the first level of access control is applied when the participant tries to join the publish/subscribe network .\nUnauthorised event brokers are not allowed to join the broker network .\nSimilarly unauthorised event clients are not allowed to connect to an event broker .\nAll the connections in the broker network between event brokers and event clients utilise Transport Layer Security ( TLS ) [ 5 ] in order to prevent unauthorised access on the transport layer .\nThe architecture of the publish/subscribe system means that event clients must connect to event brokers in order to be able to access the publish/subscribe system .\nThus we assume that these clients are not a threat .\nThe event client relies completely on the local event broker for access to the broker network .\nTherefore the event client is unable to access any events without the assistance of the local broker .\nThe brokers on the other hand are able to analyse all events in the system that pass through them .\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event ( in the case of attribute level encryption ) .\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern .\nSimilarly attribute content can be padded to a standard length in order to avoid leaking information to the adversary .\nWhile traffic analysis is an important concern we have not addressed it further in this paper .\n3 .\nENCRYPTING EVENT CONTENT\n3.1 Event Encryption\n3.2 Attribute Encryption\n3.3 Encrypting Subscriptions\n3.4 Avoiding Unnecessary Cryptographic Operations\n3.5 Implementation\n4 .\nKEY MANAGEMENT\n4.1 Secure Group Communication\n4.2 Key Refreshing\n5 .\nEVALUATION\n5.1 End-to-End Overhead\n5.2 Domain Internal Events\n5.3 Communication Overhead\n6 .\nRELATED WORK\nWang et al. have categorised the various security issues that need to be addressed in publish/subscribe systems in the future in [ 20 ] .\nThe paper is a comprehensive overview of security issues in publish/subscribe systems and as such tries to draw attention to the issues rather than providing solutions .\nBacon et al. 
in [ 1 ] examine the use of role-based access control in multi-domain , distributed publish/subscribe systems .\nTheir work is complementary to this paper : distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented .\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [ 10 ] .\nThey correctly state that a secure group communication approach is infeasible in an environment like publish/subscribe that has highly dynamic group memberships .\nAs a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from a SHB to a set of matching subscribers .\nWe assume in our work that the SHB is powerful enough to man\nFigure 8 : Hop Counts When Emulating Attribute Encryption\nage a TLS secured connection for each local subscriber .\nBoth Srivatsa et al. [ 19 ] and Raiciu et al. [ 16 ] present mechanisms for protecting the confidentiality of messages in decentralised publish/subscribe infrastructures .\nCompared to our work both papers aim to provide the means for protecting the integrity and confidentiality of messages whereas the goal for our work is to enforce access control inside the broker network .\nRaiciu et al. assume in their work that none of the brokers in the network are trusted and therefore all events are encrypted from publisher to subscriber and that all matching is based on encrypted events .\nIn contrast , we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching .\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication .\nThe contributions of Srivatsa et al. and Raiciu et al. are complementary to the contributions in this paper .\nFinally , Fiege et al. 
address the related topic of event visibility in [ 6 ] .\nWhile the work concentrated on using scopes as mechanism for structuring large-scale event-based systems , the notion of event visibility does resonate with access control to some extent .\n7 .\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish/subscribe system .\nEncryption causes an overhead , but i ) there may be no alternative when access control is required , and ii ) the performance penalty can be lessened with implementation optimisations , such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials .\nThis is particularly appropriate if broker-to-broker connections are secured by default so that wire-sniffing is not an issue .\nAttribute level encryption can be implemented in order to enforce fine-grained access control policies .\nIn addition to providing attribute-level access control , attribute encryption enables partially authorised brokers to implement contentbased routing based on the attributes that are accessible to them .\nOur experiments show that i ) by caching plaintext and ciphertext content when possible , we are able to deliver comparable performance to plaintext events , and ii ) that attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection .\nIn environments comprising multiple domains , where eventbrokers have different security credentials , we have quantified how a trade-off can be made between performance and expressiveness .", "lvl-4": "Encryption-Enforced Access Control in Dynamic Multi-Domain Publish/Subscribe Networks\nABSTRACT\nPublish/subscribe systems provide an efficient , event-based , wide-area distributed communications infrastructure .\nLarge scale publish/subscribe systems are likely to employ components of the event transport network owned by cooperating , but independent organisations .\nAs the number of participants in the network increases , security becomes an increasing concern .\nThis paper extends previous work to present and evaluate a secure multi-domain publish/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types .\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network .\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques , and by the use of caching to decrease unnecessary decryptions .\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish/subscribe networks .\n1 .\nINTRODUCTION\nPublish/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications .\nof participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them .\nIn truly Internet-scale publish/subscribe systems , the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic ( and thus network ) area .\nWhile the communication capabilities of publish/subscribe systems are well proved , spanning multiple administrative domains is likely to require addressing security considerations .\nAs 
security and access control are almost the antithesis of decoupling , relatively little publish/subscribe research has focused on security so far .\nOur overall research aim is to develop Internet-scale publish/subscribe networks that provide secure , efficient delivery of events , fault-tolerance and self-healing in the delivery infrastructure , and a convenient event interface .\nIn [ 12 ] Pesonen et al. propose a multi-domain , capabilitybased access control architecture for publish/subscribe systems .\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types .\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish / subscribe system .\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly .\nAny malicious , compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers .\nWe propose enforcing access control within the broker network by encrypting event content , and that policy dictate controls over the necessary encryption keys .\nWith encrypted event content only those brokers that are authorised to ac\ncess the encryption keys are able to access the event content ( i.e. publish , subscribe to , or filter ) .\nWe effectively move the enforcement of access control from the brokers to the encryption key managers .\nWe expect that access control would need to be enforced in a multi-domain publish/subscribe system when multiple organisations form a shared publish/subscribe system yet run multiple independent applications .\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish/subscribe system .\nBoth cases require access control because event delivery in a dynamic publish/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers .\nThere are two particular benefits to sharing the publish / subscribe infrastructure , both of which relate to the broker network .\nFirst , sharing brokers will create a physically larger network that will provide greater geographic reach .\nSecond , increasing the inter-connectivity of brokers will allow the publish/subscribe system to provide higher faulttolerance .\nFigure 1 shows the multi-domain publish/subscribe network we use as an example throughout this paper .\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area .\nWe have included Detective Smith as a subscriber in this domain .\nCongestion Charge Service Domain .\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain .\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain .\nThe fact that the CCS are only authorised to read a subset of the vehicle event data will exercise some of the key features of the enforceable publish/subscribe system access control presented in this paper .\nPITO Domain .\nIt is the event type owner in this particular scenario .\nEncryption protects the confidentiality of events should they be transported through unauthorised domains 
.\nHowever encrypting whole events means unauthorised brokers can not make efficient routing decisions .\nOur approach is to apply encryption to the individual attributes of events .\nThis way our multi-domain access control policy works at a finer granularity -- publishers and subscribers may be authorised access to a subset of the available attributes .\nIn cases where non-encrypted events are used for routing , we can reduce the total number of events sent through the system without revealing the values of sensitive attributes .\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish/subscribe infrastructure .\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case .\nCurrent publish/subscribe access control systems enforce security at the edge of the broker network where clients connect to it .\nHowever this approach will often not be acceptable in Internet-scale systems .\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to , by encrypting event content .\nPublications will be encrypted with their event type specific encryption keys .\nBy controlling access to the encryption keys , we can control access to the event types .\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys .\nWe introduce decentralised publish/subscribe systems and relevant cryptography in Section 2 .\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level .\nSection 4 discusses managing encryption keys in multi-domain publish/subscribe systems .\nFinally Section 6 discusses related work in securing publish/subscribe systems and Section 7 provides concluding remarks .\n2 .\nBACKGROUND\nIn this section we provide a brief introduction to decentralised publish/subscribe systems .\nWe indicate our assumptions about multi-domain publish/subscribe systems , and describe how these assumptions influence the developments we have made from our previously published work .\n2.1 Decentralised Publish/Subscribe Systems\nA publish/subscribe system includes publishers , subscribers , and an event service .\nPublishers publish events , subscribers subscribe to events of interest to them , and the event service is responsible for delivering published events to all subscribers whose interests match the given event .\nThe event service in a decentralised publish/subscribe system is distributed over a number of broker nodes .\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers .\nClients ( publishers and subscribers ) connect to a local broker , which is fully trusted by the client .\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers ( PHB ) or subscriber hosting brokers ( SHB ) depending on whether the connected client is a publisher or\nFigure 1 : An overall view of our multi-domain publish/subscribe deployment\na subscriber , respectively .\nA local broker is usually either part of the same domain as the client , or it is owned by a service provider trusted by the client .\nA broker network can have a static topology ( e.g. Siena [ 3 ] and Gryphon [ 14 ] ) or a dynamic topology ( e.g. 
Scribe [ 4 ] and Hermes [ 13 ] ) .\nOur proposed approach will work in both cases .\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions ( see Sect .\nOur work is based on the Hermes system .\nHermes is a content-based publish/subscribe middleware that includes strong event type support .\nIn other words , each publication is an instance of a particular predefined event type .\nPublications are type checked at the local broker of each publisher .\nOur attribute level encryption scheme assumes that events are typed .\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology .\nA Hermes publication consists of an event type identifier and a set of attribute value pairs .\nThe type identifier is the SHA-1 hash of the name of the event type .\nIt is used to route the publication through the event broker network .\nIt conveniently hides the type of the publication , i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier .\n2.2 Secure Event Types\nPesonen et al. introduced secure event types in [ 11 ] , which can have their integrity and authenticity confirmed by checking their digital signatures .\nA useful side effect of secure event types are their globally unique event type and attribute names .\nThese names can be referred to by access control policies .\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute .\n2.3 Capability-Based Access Control\nPesonen et al. proposed a capability-based access control architecture for multi-domain publish/subscribe systems in [ 12 ] .\nThe model treats event types as resources that publishers , subscribers , and event brokers want to access .\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure ( SPKI ) authorisation certificates that grant the holder access to the specified event type .\nFor example , authorised publishers will have been issued an authorisation certificate that specifies that the publisher , identified by public key , is authorised to publish instances of the event type specified in the certificate .\nWe leverage the above mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates .\nThat is , a publisher who is authorised to publish a given event type , is also authorised\nto access the encryption keys used to protect events of that type .\n4 .\n2.4 Threat model\nThe goal of the proposed mechanism is to enforce access control for authorised participants in the system .\nIn our case the first level of access control is applied when the participant tries to join the publish/subscribe network .\nUnauthorised event brokers are not allowed to join the broker network .\nSimilarly unauthorised event clients are not allowed to connect to an event broker .\nAll the connections in the broker network between event brokers and event clients utilise Transport Layer Security ( TLS ) [ 5 ] in order to prevent unauthorised access on the transport layer .\nThe architecture of the publish/subscribe system means that event clients must connect to event brokers in order to be able to access the publish/subscribe system .\nThus we assume that these clients are not a threat .\nThe event client 
relies completely on the local event broker for access to the broker network .\nTherefore the event client is unable to access any events without the assistance of the local broker .\nThe brokers on the other hand are able to analyse all events in the system that pass through them .\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event ( in the case of attribute level encryption ) .\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern .\n6 .\nRELATED WORK\nWang et al. have categorised the various security issues that need to be addressed in publish/subscribe systems in the future in [ 20 ] .\nThe paper is a comprehensive overview of security issues in publish/subscribe systems and as such tries to draw attention to the issues rather than providing solutions .\nBacon et al. in [ 1 ] examine the use of role-based access control in multi-domain , distributed publish/subscribe systems .\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [ 10 ] .\nThey correctly state that a secure group communication approach is infeasible in an environment like publish/subscribe that has highly dynamic group memberships .\nWe assume in our work that the SHB is powerful enough to man\nFigure 8 : Hop Counts When Emulating Attribute Encryption\nage a TLS secured connection for each local subscriber .\nBoth Srivatsa et al. [ 19 ] and Raiciu et al. [ 16 ] present mechanisms for protecting the confidentiality of messages in decentralised publish/subscribe infrastructures .\nCompared to our work both papers aim to provide the means for protecting the integrity and confidentiality of messages whereas the goal for our work is to enforce access control inside the broker network .\nRaiciu et al. assume in their work that none of the brokers in the network are trusted and therefore all events are encrypted from publisher to subscriber and that all matching is based on encrypted events .\nIn contrast , we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching .\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication .\nFinally , Fiege et al. 
address the related topic of event visibility in [ 6 ] .\nWhile the work concentrated on using scopes as mechanism for structuring large-scale event-based systems , the notion of event visibility does resonate with access control to some extent .\n7 .\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish/subscribe system .\nAttribute level encryption can be implemented in order to enforce fine-grained access control policies .\nIn addition to providing attribute-level access control , attribute encryption enables partially authorised brokers to implement contentbased routing based on the attributes that are accessible to them .", "lvl-2": "Encryption-Enforced Access Control in Dynamic Multi-Domain Publish/Subscribe Networks\nABSTRACT\nPublish/subscribe systems provide an efficient , event-based , wide-area distributed communications infrastructure .\nLarge scale publish/subscribe systems are likely to employ components of the event transport network owned by cooperating , but independent organisations .\nAs the number of participants in the network increases , security becomes an increasing concern .\nThis paper extends previous work to present and evaluate a secure multi-domain publish/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types .\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network .\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques , and by the use of caching to decrease unnecessary decryptions .\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish/subscribe networks .\n1 .\nINTRODUCTION\nPublish/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications .\nMuch of its capacity for scale in the number\nof participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them .\nIn truly Internet-scale publish/subscribe systems , the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic ( and thus network ) area .\nHowever , publish/subscribe systems that do span a wide geographic area are likely to also span multiple administrative domains , be they independent administrative domains inside a single organisation , multiple independent organisations , or a combination of the two .\nWhile the communication capabilities of publish/subscribe systems are well proved , spanning multiple administrative domains is likely to require addressing security considerations .\nAs security and access control are almost the antithesis of decoupling , relatively little publish/subscribe research has focused on security so far .\nOur overall research aim is to develop Internet-scale publish/subscribe networks that provide secure , efficient delivery of events , fault-tolerance and self-healing in the delivery infrastructure , and a convenient event interface .\nIn [ 12 ] Pesonen et al. 
propose a multi-domain , capabilitybased access control architecture for publish/subscribe systems .\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types .\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish / subscribe system .\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly .\nAny malicious , compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers .\nThis might be acceptable in a relatively small system deployed inside a single organisation , but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure .\nWe propose enforcing access control within the broker network by encrypting event content , and that policy dictate controls over the necessary encryption keys .\nWith encrypted event content only those brokers that are authorised to ac\ncess the encryption keys are able to access the event content ( i.e. publish , subscribe to , or filter ) .\nWe effectively move the enforcement of access control from the brokers to the encryption key managers .\nWe expect that access control would need to be enforced in a multi-domain publish/subscribe system when multiple organisations form a shared publish/subscribe system yet run multiple independent applications .\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish/subscribe system .\nBoth cases require access control because event delivery in a dynamic publish/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers .\nThere are two particular benefits to sharing the publish / subscribe infrastructure , both of which relate to the broker network .\nFirst , sharing brokers will create a physically larger network that will provide greater geographic reach .\nSecond , increasing the inter-connectivity of brokers will allow the publish/subscribe system to provide higher faulttolerance .\nFigure 1 shows the multi-domain publish/subscribe network we use as an example throughout this paper .\nIt is based on the United Kingdom Police Forces , and we show three particular sub-domains : Metropolitan Police Domain .\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area .\nWe have included Detective Smith as a subscriber in this domain .\nCongestion Charge Service Domain .\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain .\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain .\nThe fact that the CCS are only authorised to read a subset of the vehicle event data will exercise some of the key features of the enforceable publish/subscribe system access control presented in this paper .\nPITO Domain .\nThe Police Information Technology Organisation ( PITO ) is the centre from which Police data standards are managed .\nIt is the event type owner in this particular scenario .\nEncryption protects the confidentiality of events should they be transported 
through unauthorised domains .\nHowever encrypting whole events means unauthorised brokers can not make efficient routing decisions .\nOur approach is to apply encryption to the individual attributes of events .\nThis way our multi-domain access control policy works at a finer granularity -- publishers and subscribers may be authorised access to a subset of the available attributes .\nIn cases where non-encrypted events are used for routing , we can reduce the total number of events sent through the system without revealing the values of sensitive attributes .\nIn our example scenario , the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings -- the location attribute would not be decrypted .\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish/subscribe infrastructure .\nLet us assume that a Metropolitan Police Service detective is investigating a crime and she is interested in sightings of a specific vehicle .\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case .\nCurrent publish/subscribe access control systems enforce security at the edge of the broker network where clients connect to it .\nHowever this approach will often not be acceptable in Internet-scale systems .\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to , by encrypting event content .\nPublications will be encrypted with their event type specific encryption keys .\nBy controlling access to the encryption keys , we can control access to the event types .\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys .\nWe introduce decentralised publish/subscribe systems and relevant cryptography in Section 2 .\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level .\nSection 4 discusses managing encryption keys in multi-domain publish/subscribe systems .\nWe analytically evaluate the performance of our proposal in Section 5 .\nFinally Section 6 discusses related work in securing publish/subscribe systems and Section 7 provides concluding remarks .\n2 .\nBACKGROUND\nIn this section we provide a brief introduction to decentralised publish/subscribe systems .\nWe indicate our assumptions about multi-domain publish/subscribe systems , and describe how these assumptions influence the developments we have made from our previously published work .\n2.1 Decentralised Publish/Subscribe Systems\nA publish/subscribe system includes publishers , subscribers , and an event service .\nPublishers publish events , subscribers subscribe to events of interest to them , and the event service is responsible for delivering published events to all subscribers whose interests match the given event .\nThe event service in a decentralised publish/subscribe system is distributed over a number of broker nodes .\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers .\nClients ( publishers and subscribers ) connect to a local broker , which is fully trusted by the client .\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers ( PHB ) or subscriber hosting brokers ( SHB ) depending on whether the connected client is a publisher or\nFigure 1 : An overall view of our 
multi-domain publish/subscribe deployment\na subscriber , respectively .\nA local broker is usually either part of the same domain as the client , or it is owned by a service provider trusted by the client .\nA broker network can have a static topology ( e.g. Siena [ 3 ] and Gryphon [ 14 ] ) or a dynamic topology ( e.g. Scribe [ 4 ] and Hermes [ 13 ] ) .\nOur proposed approach will work in both cases .\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions ( see Sect .\n3.4 ) , which is very difficult with a dynamic topology .\nOn the other hand , a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network either in a controlled fashion or as a result of a network or node failure .\nOur work is based on the Hermes system .\nHermes is a content-based publish/subscribe middleware that includes strong event type support .\nIn other words , each publication is an instance of a particular predefined event type .\nPublications are type checked at the local broker of each publisher .\nOur attribute level encryption scheme assumes that events are typed .\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology .\nA Hermes publication consists of an event type identifier and a set of attribute value pairs .\nThe type identifier is the SHA-1 hash of the name of the event type .\nIt is used to route the publication through the event broker network .\nIt conveniently hides the type of the publication , i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier .\n2.2 Secure Event Types\nPesonen et al. introduced secure event types in [ 11 ] , which can have their integrity and authenticity confirmed by checking their digital signatures .\nA useful side effect of secure event types are their globally unique event type and attribute names .\nThese names can be referred to by access control policies .\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute .\n2.3 Capability-Based Access Control\nPesonen et al. 
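To make the identifier scheme described above concrete, the sketch below derives a Hermes-style type identifier as the SHA-1 hash of an event type name; attribute identifiers are obtained the same way from fully qualified attribute names. This is an illustrative fragment rather than Hermes code: the class, method, and example names (`TypeIdentifiers`, `identifierFor`, `uk.police.Numberplate`) are ours.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative derivation of Hermes-style type/attribute identifiers (SHA-1 of the name). */
public final class TypeIdentifiers {

    /** Returns the 20-byte SHA-1 digest of a globally unique type or attribute name. */
    public static byte[] identifierFor(String globallyUniqueName) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            return sha1.digest(globallyUniqueName.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    /** Hex rendering, convenient for logging or for use as a map key. */
    public static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The type and attribute names below are hypothetical examples.
        System.out.println("type id:      " + toHex(identifierFor("uk.police.Numberplate")));
        System.out.println("attribute id: " + toHex(identifierFor("uk.police.Numberplate#location")));
    }
}
```

Because the digest reveals nothing about the name to a broker that does not already know that name, routing on the identifier alone does not disclose which event type is flowing through an unauthorised broker.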
proposed a capability-based access control architecture for multi-domain publish/subscribe systems in [ 12 ] .\nThe model treats event types as resources that publishers , subscribers , and event brokers want to access .\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure ( SPKI ) authorisation certificates that grant the holder access to the specified event type .\nFor example , authorised publishers will have been issued an authorisation certificate that specifies that the publisher , identified by public key , is authorised to publish instances of the event type specified in the certificate .\nWe leverage the above mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates .\nThat is , a publisher who is authorised to publish a given event type , is also authorised\nto access the encryption keys used to protect events of that type .\nWe discuss this in more detail in Sect .\n4 .\n2.4 Threat model\nThe goal of the proposed mechanism is to enforce access control for authorised participants in the system .\nIn our case the first level of access control is applied when the participant tries to join the publish/subscribe network .\nUnauthorised event brokers are not allowed to join the broker network .\nSimilarly unauthorised event clients are not allowed to connect to an event broker .\nAll the connections in the broker network between event brokers and event clients utilise Transport Layer Security ( TLS ) [ 5 ] in order to prevent unauthorised access on the transport layer .\nThe architecture of the publish/subscribe system means that event clients must connect to event brokers in order to be able to access the publish/subscribe system .\nThus we assume that these clients are not a threat .\nThe event client relies completely on the local event broker for access to the broker network .\nTherefore the event client is unable to access any events without the assistance of the local broker .\nThe brokers on the other hand are able to analyse all events in the system that pass through them .\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event ( in the case of attribute level encryption ) .\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern .\nSimilarly attribute content can be padded to a standard length in order to avoid leaking information to the adversary .\nWhile traffic analysis is an important concern we have not addressed it further in this paper .\n3 .\nENCRYPTING EVENT CONTENT\nWe propose enforcing access control in a decentralised broker network by encrypting the contents of published events and controlling access to the encryption keys .\nEffectively we move the responsibility for access control from the broker network to the key managers .\nIt is assumed that all clients have access to a broker that they can trust and that is authorised to access the event content required by the client .\nThis allows us to implement the event content encryption within the broker network without involving the clients .\nBy delegating the encryption tasks to the brokers , we lower the number of nodes required to have access to a given encryption key ' .\nThe benefits are three-fold : i ) fewer nodes handle the confidential encryption key so there is a smaller chance of the key being 
disclosed ; ii ) key refreshes involve fewer nodes which means that the key management algorithm will incur smaller communication and processing overheads to the publish/subscribe system ; and iii ) the local broker will decrypt an event once and deliver it to all subscribers , instead of each subscriber ' The encryption keys are changed over time in response to brokers joining or leaving the network , and periodically to reduce the amount of time any single key is used .\nThis is discussed in Sect .\n4.2 having to decrypt the same event .\nDelegating encryption tasks to the local broker is appropriate , because encryption is a middleware feature used to enforce access control within the middleware system .\nIf applications need to handle encrypted data in the application layer , they are free to publish encrypted data over the publish/subscribe system .\nWe can implement encryption either at the event level or the attribute level .\nEvent encryption is simpler , requires fewer keys , fewer independent cryptographic operations , and thus is usually faster .\nAttribute encryption enables access control at the attribute level , which means that we have a more expressive and powerful access control mechanism , while usually incurring a larger performance penalty .\nIn this section we discuss encrypting event content both at the event level and the attribute level ; avoiding leaking information to unauthorised brokers by encrypting subscription filters ; avoiding unnecessary encryptions between authorised brokers ; and finally , how event content encryption was implemented in our prototype .\nNote that since no publish/subscribe client is ever given access to encryption keys , any encryption performed by the brokers is necessarily completely transparent to all clients .\n3.1 Event Encryption\nIn event encryption all the event attributes are encrypted as a single block of plaintext .\nThe event type identifier is left intact ( i.e. 
in plaintext ) in order to facilitate event routing in the broker network .\nThe globally unique event type identifier specifies the encryption key used to encrypt the event content .\nEach event type in the system will have its own individual encryption key .\nKeys are refreshed , as discussed in Sect .\n4.2 .\nWhile in transit the event will consist of a tuple containing the type identifier , a publication timestamp , ciphertext , and a message authentication tag : < type id , timestamp , cipher text , authentication tag > .\nEvent brokers that are authorised to access the event , and thus have access to the encryption key , can decrypt the event and implement content-based routing .\nEvent brokers that do not have access to the encryption key will be forced to route the event based only on its type .\nThat is , they will not be able to make intelligent decisions about whether events need not be transmitted down their outgoing links .\nEvent encryption results in one encryption at the publisher hosting broker , and one decryption at each filtering intermediate broker and subscriber hosting broker that the event passes through , regardless of the number of attributes .\nThis results in a significant performance advantage compared to attribute encryption .\n3.2 Attribute Encryption\nIn attribute encryption each attribute value in an event is encrypted separately with its own encryption key .\nThe encryption key is identified by the attribute 's globally unique identifier ( the globally unique event identifier defines a namespace inside which the attribute identifier is a fully qualified name ) .\nThe event type identifier is left intact to facilitate event routing for unauthorised brokers .\nThe attribute identifiers are also left intact to allow authorised brokers to decrypt the attribute values with the correct keys .\nBrokers that are authorised to access some of the attributes in an event , can implement content-based routing over the attributes that are accessible to them .\nAn attribute encrypted event in transit consists of the event type identifier , a publication timestamp , and a set of attribute tuples : < type id , timestamp , attributes > .\nAttribute tuples consist of an attribute identifier , ciphertext , and a message authentication tag : < attr id , ciphertext , authentication tag > .\nThe attribute identifier is the SHA-1 hash of the attribute name used in the event type definition .\nUsing the attribute identifier in the published event instead of the attribute name prevents unauthorised parties from learning which attributes are included in the publication .\nCompared with event encryption , attribute encryption usually results in larger processing overheads , because each attribute is encrypted separately .\nIn the encryption process the initialisation of the encryption algorithm takes a significant portion of the total running time of the algorithm .\nOnce the algorithm is initialised , increasing the amount of data to be encrypted does not affect the running time very much .\nThis disparity is emphasised in attribute encryption , where an encryption algorithm must be initialised for each attribute separately , and the amount of data encrypted is relatively small .\nAs a result attribute encryption incurs larger processing overheads when compared with event encryption which can be clearly seen from the performance results in Sect .\n5 .\nThe advantage of attribute encryption is that the type owner is able to control access to the event type at the attribute level 
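As a concrete reading of the two in-transit formats described above, the sketch below models an event-encrypted publication as the tuple < type id , timestamp , ciphertext , authentication tag > and an attribute-encrypted publication as < type id , timestamp , { < attr id , ciphertext , authentication tag > } >. The Java records (Java 16+) and their names are ours; only the field layout follows the text.

```java
import java.util.List;

/** Illustrative in-transit representations for the two encryption granularities. */
public final class WireFormats {

    /** Event-level encryption: all attributes encrypted as one block. */
    public record EncryptedEvent(byte[] typeId,          // SHA-1 of the event type name (in plaintext)
                                 long timestampMillis,   // publication timestamp set by the PHB
                                 byte[] ciphertext,      // encrypted attribute block
                                 byte[] authenticationTag) {}

    /** One encrypted attribute inside an attribute-level encrypted publication. */
    public record EncryptedAttribute(byte[] attributeId, // SHA-1 of the attribute name (in plaintext)
                                     byte[] ciphertext,
                                     byte[] authenticationTag) {}

    /** Attribute-level encryption: each attribute carries its own ciphertext and tag. */
    public record AttributeEncryptedEvent(byte[] typeId,
                                          long timestampMillis,
                                          List<EncryptedAttribute> attributes) {}

    private WireFormats() {}
}
```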
.\nThe event type owner can therefore allow clients to have different levels of access to the same event type .\nAlso , attribute level encryption enables content-based routing in cases where an intermediate broker has access only to some of the attributes of the event , thus reducing the overall impact of event delivery on the broker network .\nTherefore the choice between event and attribute encryption is a trade-off between expressiveness and performance , and depends on the requirements of the distributed application .\nThe expressiveness provided by attribute encryption can be emulated by introducing a new event type for each group of subscribers with the same authorisation .\nThe publisher would then publish an instance of each of these types instead of publishing just a combined event .\nFor example , in our London police network , the congestion control cameras would have to publish one event for the CCS and another for the detective .\nThis approach could become difficult to manage if the attributes have a variety of security properties , since a large number of event types would be required and policies and subscriptions may change dynamically .\nThis approach creates a large number of extra events that must be routed through the network , as is shown in Sect .\n5.3 .\n3.3 Encrypting Subscriptions\nIn order to fully protect the confidentiality of event content we must also encrypt subscriptions .\nEncrypted subscriptions guarantee : i ) that only authorised brokers are able to submit subscriptions to the broker network , and ii ) that unauthorised brokers do not gain information about event content by monitoring which subscriptions a given event matches .\nFor example , in the first case an unauthorised broker can create subscriptions with appropriately chosen filters , route them towards the root of the event dissemination tree , and monitor which events were delivered to it as matching the subscription .\nThe fact that the event matched the subscription would leak information to the broker about the event content even if the event was still encrypted .\nIn the second case , even if an unauthorised broker was unable to create subscriptions itself , it could still look at subscriptions that were routed through it , take note of the filters on those subscriptions , and monitor which events are delivered to it by upstream brokers as matching the subscription filters .\nThis would again reveal information about the event content to the unauthorised broker .\nIn the case of encrypting complete events , we also encrypt the complete subscription filter .\nThe event type identifier in the subscription must be left intact to allow brokers to route events based on their topic when they are not authorised to access the filter .\nIn such cases the unauthorised broker is required to assume that events of such a type match all filter expressions .\nEach attribute filter is encrypted individually , much as when encrypting a publication .\nIn addition to the event type identifier the attribute identifiers are also left intact to allow authorised brokers to decrypt those filters that they have access to , and route the event based on its matching the decrypted filters .\n3.4 Avoiding Unnecessary Cryptographic Operations\nEncrypting the event content is not necessary if the current broker and the next broker down the event dissemination tree have the same credentials with respect to the event type at hand .\nFor example , one can assume that all brokers inside an organisation would share the same 
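The requirement that a broker must treat any filter it cannot decrypt as matching every event of that type can be captured by the conservative matching rule sketched below: a broker evaluates only the attribute filters it was able to decrypt and never rejects an event on the basis of a filter or attribute value it cannot see. The class and method names are hypothetical, and the string predicates are far simpler than real subscription filters.

```java
import java.util.Map;
import java.util.function.Predicate;

/** Illustrative conservative matching at a partially authorised broker. */
public final class ConservativeMatcher {

    /** Attribute filters this broker managed to decrypt, keyed by attribute id.
     *  Filters it could not decrypt are simply absent and are therefore assumed to match. */
    private final Map<String, Predicate<String>> decryptedFilters;

    public ConservativeMatcher(Map<String, Predicate<String>> decryptedFilters) {
        this.decryptedFilters = decryptedFilters;
    }

    /**
     * @param decryptedAttributes attribute values this broker could decrypt, keyed by attribute id
     * @return false only when a filter the broker can evaluate definitely rejects the event;
     *         anything the broker cannot see is treated as a match and re-checked downstream
     */
    public boolean mayMatch(Map<String, String> decryptedAttributes) {
        for (Map.Entry<String, Predicate<String>> filter : decryptedFilters.entrySet()) {
            String value = decryptedAttributes.get(filter.getKey());
            if (value != null && !filter.getValue().test(value)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical: a CCS broker can decrypt only the numberplate filter and attribute.
        ConservativeMatcher m = new ConservativeMatcher(Map.of("numberplate", "LD53 ABC"::equals));
        System.out.println(m.mayMatch(Map.of("numberplate", "LD53 ABC")));   // true
        System.out.println(m.mayMatch(Map.of("numberplate", "XY11 ZZZ")));   // false
        System.out.println(m.mayMatch(Map.of()));                            // true: value not visible
    }
}
```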
credentials and therefore , as long as the next broker is a member of the same domain , the event can be routed to it in plaintext .\nWith attribute encryption it is possible that the neighbouring broker is authorised to access a subset of the decrypted attributes , in which case those attributes that the broker is not authorised to access would be passed to it encrypted .\nIn order to know when it is safe to pass the event in plaintext form , the brokers exchange credentials as part of a handshake when they connect to each other .\nIn cases when the brokers are able to verify each others ' credentials , they will add them to the routing table for future reference .\nIf a broker acquires new credentials after the initial handshake , it will present these new credentials to its neighbours while in session .\nRegardless of its neighbouring brokers , the PHB will always encrypt the event content , because it is cheaper to encrypt the event once at the root of the event dissemination tree .\nIn Hermes the rendezvous node for each event type is selected uniformly randomly ( the event type name is hashed with the SHA-1 hash algorithm to produce the event type\nFigure 2 : Node addressing is evenly distributed across the network , thus rendezvous nodes may lie outside the domain that owns an event type Figure 3 : Caching decrypted data to increase effi\nciency when delivering to peers with equivalent security privileges identifier , then the identifier is used to select the rendezvous node in the structured overlay network ) .\nTherefore it is probable that the rendezvous node will reside outside the current domain .\nThis situation is illustrated in the event dissemination tree in Fig. 2 .\nSo even with domain internal applications , where the event can be routed from the publisher to all subscribers in plaintext form , the event content will in most cases have to be encrypted for it to be routed to the rendezvous node .\nTo avoid unnecessary decryptions , we attach a plaintext content cache to encrypted events .\nA broker fills the cache with content that it has decrypted , for example , in order to filter on the content .\nThe cache is accessed by the broker when it delivers an event to a local subscriber after first seeing if the event matches the subscription filter , but the broker also sends the cache to the next broker with the encrypted event .\nThe next broker can look the attribute up from the cache instead of having to decrypt it .\nIf the event is being sent to an unauthorised broker , the cache will be discarded before the event is sent .\nObviously sending the cache with the encrypted event will add to the communication cost , but this is outweighed by the saving in encryption/decryption processing .\nIn Fig. 
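A minimal sketch of the plaintext cache described above follows, assuming a broker-side wrapper that carries decrypted attribute values alongside the encrypted publication and strips them before the event leaves for an unauthorised peer. The class and its methods are illustrative, not part of the prototype.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative plaintext cache travelling alongside an encrypted publication. A broker
 * that decrypts an attribute (e.g. to filter on it) records the plaintext here; an
 * authorised downstream broker reads the cache instead of decrypting again, and the
 * cache is discarded before the event is sent to an unauthorised broker.
 */
public final class CachedPublication {

    private final byte[] encryptedEvent;                                     // opaque encrypted payload
    private final Map<String, String> plaintextCache = new HashMap<>();      // attribute id -> plaintext

    public CachedPublication(byte[] encryptedEvent) {
        this.encryptedEvent = encryptedEvent;
    }

    /** Record a value this broker has already decrypted. */
    public void cache(String attributeId, String plaintextValue) {
        plaintextCache.put(attributeId, plaintextValue);
    }

    /** Look up a cached plaintext value; null means this broker must decrypt it itself. */
    public String cached(String attributeId) {
        return plaintextCache.get(attributeId);
    }

    /** Build the copy sent to the next hop, keeping the cache only for authorised peers. */
    public CachedPublication forwardCopyFor(boolean nextHopAuthorised) {
        CachedPublication copy = new CachedPublication(encryptedEvent);
        if (nextHopAuthorised) {
            copy.plaintextCache.putAll(plaintextCache);
        }
        return copy;
    }
}
```

With attribute-level credentials the forwarding step could retain only the subset of cached attributes the next hop is authorised to read, rather than the all-or-nothing choice shown here.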
3 we see two separate cached plaintext streams accompanying an event depending on the inter-broker relationships in two different domains .\nWe show in Sect .\n5.2 that the overhead of sending encrypted messages with a full plaintext cache incurs almost no overhead compared to sending plaintext messages .\n3.5 Implementation\nIn our implementation we have used the EAX mode [ 2 ] of operation when encrypting events , attributes , and subscription filters .\nEAX is a mode of operation for block ciphers , also called an Authenticated Encryption with Associated Data ( AEAD ) algorithm that provides simultaneously both data confidentiality and integrity protection .\nThe algorithm implements a two-pass scheme where during the first pass the plain text is encrypted , and on the second pass a message authentication code ( MAC ) is generated for the encrypted data .\nThe EAX mode is compatible with any block cipher .\nWe decided to use the Advanced Encryption Standard ( AES ) [ 9 ] algorithm in our implementation , because of its standard status and the fact that the algorithm has gone through thorough cryptanalysis during its existence and no serious vulnerabilities have been found thus far .\nIn addition to providing both confidentiality and integrity protection , the EAX mode uses the underlying block cipher in counter mode ( CTR mode ) [ 21 ] .\nA block cipher in counter mode is used to produce a stream of key bits that are then XORed with the plaintext .\nEffectively CTR mode transforms a block cipher into a stream cipher .\nThe advantage of stream ciphers is that the ciphertext is the same length as the plaintext , whereas with block ciphers the plaintext must be padded to a multiple of the block cipher 's block length ( e.g. the AES block size is 128 bits ) .\nAvoiding padding is very important in attribute encryption , because the padding might increase the size of the attribute disproportionally .\nFor example , a single integer might be 32 bits in length , which would be padded to 128 bits if we used a block cipher .\nWith event encryption the message expansion is not that relevant , since the length of padding required to reach the next 16 byte multiple will probably be a small proportion of the overall plaintext length .\nIn encryption mode the EAX algorithm takes as input a nonce ( a number used once ) , an encryption key and the plaintext , and it returns the ciphertext and an authentication tag .\nIn decryption mode the algorithm takes as input the encryption key , the ciphertext and the authentication tag , and it returns either the plaintext , or an error if the authentication check failed .\nThe nonce is expanded to the block length of the underlying block cipher by passing it through an OMAC construct ( see [ 7 ] ) .\nIt is important that particular nonce values are not reused , otherwise the block cipher in CTR mode would produce an identical key stream .\nIn our implementation we used the PHB defined event timestamp ( 64-bit value counting the milliseconds since January 1 , 1970 UTC ) appended by the PHB 's identity ( i.e. 
public key ) as the nonce .\nThe broker is responsible for ensuring that the timestamps increase monotonically .\nThe authentication tag is appended to the produced cipher text to create a two-tuple .\nWith event encryption a single tag is created for the encrypted event .\nWith attribute\nencryption each attribute is encrypted and authenticated separately , and they all have their individual tags .\nThe tag length is configurable in EAX without restrictions , which allows the user to make a trade-off between the authenticity guarantees provided by EAX and the added communication overhead .\nWe used a tag length of 16 bytes in our implementation , but one could make the tag length a publisher/subscriber defined parameter for each publication/subscription or include it in the event type definition to make it a type specific parameter .\nEAX also supports including unencrypted associated data in the tag calculation .\nThe integrity of this data is protected , but it is still readable by everyone .\nThis feature could be used with event encryption in cases where some of the event content is public and thus would be useful for content-based routing .\nThe integrity of the data would still be protected against changes , but unauthorised brokers would be able to apply filters .\nWe have included the event type identifier as associated data in order to protect its integrity .\nOther AEAD algorithms include the offset codebook mode ( OCB ) [ 17 ] and the counter with CBC-MAC mode ( CCM ) [ 22 ] .\nContrarily to the EAX mode the OCB mode requires only one pass over the plaintext , which makes it roughly twice as fast as EAX .\nUnfortunately the OCB mode has a patent application in place in the USA , which restricts its use .\nThe CCM mode is the predecessor of the EAX mode .\nIt was developed in order to provide a free alternative to OCB .\nThe EAX was developed later to address some issues with CCM [ 18 ] .\nSimilarly to EAX , CCM is also a two-pass mode .\n4 .\nKEY MANAGEMENT\nIn both encryption approaches the encrypted event content has a globally unique identifier ( i.e. the event type or the attribute identifier ) .\nThat identifier is used to determine the encryption key to use when encrypting or decrypting the content .\nEach event type , in event encryption , and attribute , in attribute encryption , has its own individual encryption key .\nBy controlling access to the encryption key we effectively control access to the encrypted event content .\nIn order to control access to the encryption keys we form a key group of brokers for each individual encryption key .\nThe key group is used to refresh the key when necessary and to deliver the new key to all current members of the key group .\nThe key group manager is responsible for verifying that a new member requesting to join the key group is authorised to do so .\nTherefore the key group manager must be trusted by the type owner to enforce the access control policy .\nWe assume that the key group manager is either a trusted third party or alternatively a member of the type owner 's domain .\nIn [ 12 ] Pesonen et al. proposed a capability-based access control architecture for multi-domain publish/subscribe systems .\nThe approach uses capabilities to decentralise the access control policy amongst the publish/subscribe nodes ( i.e. 
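The implementation described in Section 3.5 uses AES in EAX mode, a nonce built from the 64-bit PHB timestamp and the PHB identity, a 16-byte authentication tag, and the event type identifier bound as associated data. The JDK does not ship an EAX implementation, so the sketch below substitutes AES-GCM, another AEAD mode of AES, purely to show the encrypt-with-associated-data flow; a faithful implementation would use an EAX mode from a library such as Bouncy Castle. The 12-byte nonce layout (8-byte timestamp plus 4 bytes of broker identity) is an adaptation to GCM, not the paper's exact construction, and, as with EAX, nonces must never be reused under the same key.

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** AEAD sketch: AES-GCM standing in for the AES-EAX construction described in the paper. */
public final class AttributeCipher {

    private static final int TAG_BITS = 128;   // 16-byte authentication tag, as in the paper

    /** Nonce adapted to GCM's 12-byte IV: PHB timestamp (8 bytes) + truncated broker id (4 bytes).
     *  Timestamps must increase monotonically per broker so that nonces are never reused. */
    static byte[] nonce(long publicationTimestampMillis, int brokerIdHash) {
        return ByteBuffer.allocate(12)
                .putLong(publicationTimestampMillis)
                .putInt(brokerIdHash)
                .array();
    }

    /** Encrypts one value; the type identifier is integrity-protected as associated data. */
    static byte[] encrypt(SecretKey key, byte[] nonce, byte[] typeId, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, nonce));
        c.updateAAD(typeId);                    // authenticated but left readable for routing
        return c.doFinal(plaintext);            // ciphertext with the tag appended
    }

    /** Decrypts and verifies; throws if the authentication check fails. */
    static byte[] decrypt(SecretKey key, byte[] nonce, byte[] typeId, byte[] ciphertextAndTag) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, nonce));
        c.updateAAD(typeId);
        return c.doFinal(ciphertextAndTag);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();       // per-attribute key distributed by the key group

        // Stand-in values: in the real system typeId would be the SHA-1 type identifier
        // and the broker id component would come from the PHB's public key.
        byte[] typeId = "uk.police.Numberplate".getBytes();
        byte[] nonce = nonce(System.currentTimeMillis(), new SecureRandom().nextInt());

        byte[] sealed = encrypt(key, nonce, typeId, "LD53 ABC".getBytes());
        System.out.println(new String(decrypt(key, nonce, typeId, sealed)));
    }
}
```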
clients and brokers ) : each node holds a set of capabilities that define the authority granted to that node .\nAuthority to access a given event type is granted by the owner of that type issuing a capability to a node .\nThe capability defines the event type , the action , and the attributes that Figure 4 : The steps involved for a broker to be successful in joining a key group the node is authorised to access .\nFor example , a tuple < NP , subscribe , * > would authorise the owner to subscribe to Numberplate events with access to all attributes in the published events .\nThe sequence of events required for a broker to successfully join a key group is shown in Fig. 4 .\nBoth the client hosting broker and the client must be authorised to make the client 's request .\nThat is , if the client makes a subscription request for Numberplate events , both the client and the local broker must be authorised to subscribe to Numberplate events .\nThis is because from the perspective of the broker network , the local broker acts as a proxy for the client .\nWe use the same capabilities to authorise membership in a key group that are used to authorise publish/subscribe requests .\nNot doing so could lead to the inconsistent situation where a SHB is authorised to make a subscription on behalf of its clients , but is not able to decrypt incoming event content for them .\nIn the Numberplate example above , the local broker holding the above capability is authorised to join the Numberplate key group as well as the key groups for all the attributes in the Numberplate event type .\n4.1 Secure Group Communication\nEvent content encryption in a decentralised multi-domain publish/subscribe system can be seen as a sub-category of secure group communication .\nIn both cases the key management system must scale well with the number of clients , clients might be spread over large geographic areas , there might be high rates of churn in group membership , and all members must be synchronised with each other in time in order to use the same encryption key at the same time .\nThere are a number of scalable key management protocols for secure group communication [ 15 ] .\nWe have implemented the One-Way Function Tree ( OFT ) [ 8 ] protocol as a proof of concept .\nWe chose to implement OFT , because of its relatively simplicity and good performance .\nOur implementation uses the same structured overlay network used by the broker network as a transport .\nThe OFT protocol is based on a binary tree where the participants are at the leaves of the tree .\nIt scales in log2n in processing and communication costs , as well as in the size of the state stored at each participant , which we have verified in our simulations .\n4.2 Key Refreshing\nTraditionally in group key management schemes the encryption key is refreshed when a new member joins the group , an\nexisting member leaves the group , or a timer expires .\nRefreshing the key when a new member joins provides backward secrecy , i.e. the new member is prevented from accessing old messages .\nSimilarly refreshing the key when an existing member leaves provides forward secrecy , i.e. 
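The authorisation step of Fig. 4 can be illustrated with the check below, in which a key group manager admits a broker to the key group for a given type and attribute only if one of the broker's capabilities covers it; the < Numberplate , subscribe , * > example mirrors the < NP , subscribe , * > capability in the text. Signed SPKI certificates, delegation chains, and revocation are deliberately omitted, so this is a sketch of the policy test only, under class and method names of our choosing.

```java
import java.util.List;
import java.util.Set;

/** Illustrative authorisation check performed before admitting a broker to a key group. */
public final class KeyGroupManager {

    /** Simplified capability: <event type, action, authorised attributes>, with "*" meaning all. */
    public record Capability(String eventType, String action, Set<String> attributes) {
        boolean covers(String type, String requestedAction, String attribute) {
            return eventType.equals(type)
                    && action.equals(requestedAction)
                    && (attributes.contains("*") || attributes.contains(attribute));
        }
    }

    /** A broker may join the key group for (type, attribute) if any of its capabilities covers it. */
    public static boolean mayJoinKeyGroup(List<Capability> holderCapabilities,
                                          String eventType, String action, String attribute) {
        return holderCapabilities.stream()
                .anyMatch(c -> c.covers(eventType, action, attribute));
    }

    public static void main(String[] args) {
        // <Numberplate, subscribe, *>: authorises the key groups of every Numberplate attribute.
        Capability detective = new Capability("Numberplate", "subscribe", Set.of("*"));
        // The Congestion Charge Service is only authorised for the numberplate attribute.
        Capability ccs = new Capability("Numberplate", "subscribe", Set.of("numberplate"));

        System.out.println(mayJoinKeyGroup(List.of(detective), "Numberplate", "subscribe", "location")); // true
        System.out.println(mayJoinKeyGroup(List.of(ccs), "Numberplate", "subscribe", "location"));       // false
    }
}
```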
the old member is prevented from accessing future messages .\nTimer triggered refreshes are issued periodically in order to limit the damage caused by the current key being compromised .\nEven though the state-of-the-art key management protocols are efficient , refreshing the key unnecessarily introduces extra traffic and processing amongst the key group members .\nIn our case key group membership is based on the broker holding a capability that authorises it to join the key group .\nThe capability has a set of validity conditions that in their simplest form define a time period when the certificate is valid , and in more complex cases involve on-line checks back towards the issuer .\nIn order to avoid unnecessary key refreshes the key manager looks at the certificate validity conditions of the joining or leaving member .\nIn case of a joining member , if the manager can ascertain that the certificate was valid at the time of the previous key refresh , a new key refresh can be avoided .\nSimilarly , instead of refreshing the key immediately when a member leaves the key group , the key manager can cache their credentials and refresh the key only when the credentials expire .\nThese situations are both illustrated in Fig. 5 .\nIt can be assumed that the credentials granted to brokers are relatively static , i.e. once a domain is authorised to access an event type , the authority will be delegated to all brokers of that domain , and they will have the authority for the foreseeable future .\nMore fine grained and dynamic access control would be implemented at the edge of the broker network between the clients and the client hosting brokers .\nWhen an encryption key is refreshed the new key is tagged with a timestamp .\nThe encryption key to use for a given event is selected based on the event 's publication timestamp .\nThe old keys will be kept for a reasonable amount of time in order to allow for some clock drift .\nSetting this value is part of the key management protocol , although exactly how long this time should be will depend on the nature of the application and possibly the size of the network .\nIt can be configured independently per key group if necessary .\n5 .\nEVALUATION\nIn order to evaluate the performance of event content encryption we have implemented both encryption approaches running over our implementation of the Hermes publish / subscribe middleware .\nThe implementation supports three modes : plaintext content , event encryption , and attribute encryption , in a single publish/subscribe system .\nWe ran three performance tests in a discrete event simulator .\nThe simulator was run on an Intel P4 3.2 GHz workstation with 1GB of main memory .\nWe decided to run the tests on an event simulator instead of an actual deployed system in order to be able to measure to aggregate time it takes to handle all messages in the system .\nThe following sections describe the specific test setups and the results in more detail .\n5.1 End-to-End Overhead\nThe end-to-end overhead test shows how much the overall message throughput of the simulator was affected by event content encryption .\nWe formed a broker network with two brokers , attached a publisher to one of them and a subscriber to the other one .\nThe subscriber subscribed to the advertised event type without any filters , i.e. 
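The timestamped-key scheme discussed in Section 4.2 is sketched below: each refresh installs a key tagged with its refresh time, the key for an incoming event is selected by the event's publication timestamp, and superseded keys are retained for a configurable grace period to absorb clock drift. The store and its names are illustrative; the deferred-refresh logic driven by certificate validity would sit in the key group manager and is not shown.

```java
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

/** Illustrative store of timestamped keys for one key group. */
public final class KeyGroupKeyStore {

    private final TreeMap<Long, SecretKey> keysByRefreshTime = new TreeMap<>();
    private final long retentionMillis;        // how long superseded keys remain usable

    public KeyGroupKeyStore(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    /** Installs a freshly generated key tagged with the refresh timestamp. */
    public void refresh(long refreshTimeMillis) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        keysByRefreshTime.put(refreshTimeMillis, kg.generateKey());
        // Drop keys that have fallen out of the retention window.
        keysByRefreshTime.headMap(refreshTimeMillis - retentionMillis).clear();
    }

    /** Selects the key in force at the event's publication timestamp (null if it is too old). */
    public SecretKey keyFor(long publicationTimestampMillis) {
        Map.Entry<Long, SecretKey> entry = keysByRefreshTime.floorEntry(publicationTimestampMillis);
        return entry == null ? null : entry.getValue();
    }
}
```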
each publication matched the subscriber 's publication and thus was delivered to the subscriber .\nThe test measures the combined time it takes to publish and deliver 100,000 events .\nIf the content is encrypted this includes both encrypting the content at the PHB and decrypting it at the SHB .\nIn the test the number of attributes in the event type is increased from 1 to 25 ( the x-axis ) .\nEach attribute is set to a 30 character string .\nFor each number of attributes in the event type the publisher publishes 100,000 events , and the elapsed time is measured to derive the message throughput .\nThe test was repeated five times for each number of attributes and we use the average of all iterations in the graph , but the results were highly consistent so the standard deviation is not shown .\nThe same tests were run with no content encryption , event encryption , and attribute encryption .\nAs can be seen in Fig. 6 , event content encryption introduces a large overhead compared to not using encryption .\nThe throughput when using attribute encryption with an event type with one attribute is 46 % of the throughput achieved when events are sent in plaintext .\nWhen the number of attributes increases the performance gap increases as well : with ten attributes the performance with attribute encryption has decreased to 11.7 % of plaintext performance .\nEvent encryption fares better , because of fewer encryption operations .\nThe increase in the amount of encrypted data does not affect the performance as much as the number of individual encryption operations does .\nThe difference in performance with event encryption and attribute encryption with only one attribute is caused by the Java object serialisation mechanism : in the event encryption case the whole attribute structure is serialised , which results in more objects than serialising a single attribute value .\nA more efficient implementation would provide its own marshalling mechanism .\nNote that the EAX implementation we use runs the nonce ( i.e. initialisation vector ) through an OMAC construct to increase its randomness .\nSince the nonce is not required to be kept secret ( just unique ) , there is a potential time/space trade-off we have not yet investigated in attaching extra nonce attributes that have already had this OMAC construct applied to them .\n5.2 Domain Internal Events\nWe explained in Sect .\n3.4 that event content decryption and encryption can be avoided if both brokers are authorised to access the event content .\nThis test was designed to show that the use of the encrypted event content mechanism between two authorised brokers incurs only a small performance overhead .\nIn this test we again form a broker network with two brokers .\nFigure 5 : How the key refresh schedule is affected by brokers joining and leaving key groups\nFigure 6 : Throughput of Events in a Simulator\nBoth brokers are configured with the same credentials .\nThe publisher is attached to one of the brokers and the subscriber to the other , and again the subscriber does not specify any filters in its subscription .\nThe publisher publishes 100,000 events and the test measures the elapsed time in order to derive the system 's message throughput .\nThe event content is encrypted outside the timing measurement , i.e. 
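The end-to-end methodology can be approximated outside the simulator with the toy micro-benchmark below, which encrypts batches of synthetic 30-character attribute values once as a whole event and once attribute by attribute, and reports events per second. It uses AES-GCM rather than EAX, runs no broker network, and is only intended to reproduce the shape of the comparison, not the reported figures.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** Toy throughput comparison of event-level vs attribute-level encryption. */
public final class EncryptionThroughputDemo {

    private static long counter = 0;

    private static byte[] nextNonce() {                 // unique 12-byte nonce per encryption
        return ByteBuffer.allocate(12).putLong(counter++).array();
    }

    private static void encrypt(SecretKey key, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nextNonce()));
        c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        int events = 100_000;
        byte[] attribute = new byte[30];                 // one 30-character attribute value
        Arrays.fill(attribute, (byte) 'x');

        for (int attrs = 1; attrs <= 25; attrs += 8) {
            byte[] wholeEvent = new byte[30 * attrs];    // all attributes as a single plaintext block
            Arrays.fill(wholeEvent, (byte) 'x');

            long t0 = System.nanoTime();
            for (int i = 0; i < events; i++) encrypt(key, wholeEvent);            // event encryption
            long t1 = System.nanoTime();
            for (int i = 0; i < events; i++)
                for (int a = 0; a < attrs; a++) encrypt(key, attribute);          // attribute encryption
            long t2 = System.nanoTime();

            System.out.printf("%2d attrs: event-level %.0f ev/s, attribute-level %.0f ev/s%n",
                    attrs, events / ((t1 - t0) / 1e9), events / ((t2 - t1) / 1e9));
        }
    }
}
```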
the encryption cost is not included in the measurements .\nThe goal is to model an environment where a broker has received a message from another authorised broker , and it routes the event to a third authorised broker .\nIn this scenario the middle broker does not need to decrypt nor encrypt the event content .\nAs shown in Fig. 2 , the elapsed time was measured as the number of attributes in the published event was increased from 1 to 25 .\nThe attribute values in each case are 30 character strings .\nEach test is repeated five times , and we use the average of all iterations in the graph .\nThe same test was then repeated with no encryption , event encryption and attribute encryption turned on .\nThe encrypted modes follow each other very closely .\nPredictably , the plaintext mode performs a little better for all attribute counts .\nThe difference can be explained partially by the encrypted events being larger in size , because they include both the plaintext and the encrypted content in this test .\nThe difference in performance is 3.7 % with one attribute and 2.5 % with 25 attributes .\nWe believe that the roughness of the graphs can be explained by the Java garbage collector interfering with the simulation .\nThe fact that all three graphs show the same irregularities supports this theory .\nFigure 7 : Throughput of Domain Internal Events\n5.3 Communication Overhead\nThrough the definition of multiple event types , it is possible to emulate the expressiveness of attribute encryption using only event content encryption .\nThe last test we ran was to show the communication overhead caused by this emulation technique , compared to using real attribute encryption .\nIn the test we form a broker network of 2000 brokers .\nWe attach one publisher to one of the brokers , and an increasing number of subscribers to the remaining brokers .\nEach subscriber simulates a group of subscribers that all have the same access rights to the published event .\nEach subscriber group has its own event type in the test .\nThe outcome of this test is shown in Fig. 8 .\nThe number of subscriber groups is increased from 1 to 50 ( the x-axis ) .\nFor each n subscriber groups the publisher publishes one event to represent the use of attribute encryption and n events representing the events for each subscriber group .\nWe count the number of hops each publication makes through the broker network ( y-axis ) .\nNote that Fig. 8 shows workloads beyond what we would expect in common usage , in which many event types are likely to contain fewer than ten attributes .\nThe subscriber groups used in this test represent disjoint permission sets over such event attributes .\nThe number of these sets can be determined from the particular access control policy in use , but will be a value less than or equal to the factorial of the number of attributes in a given event type .\nThe graphs indicate that attribute encryption performs better than event encryption even for small numbers of subscriber groups .\nIndeed , with only two subscriber groups ( e.g. the case with Numberplate events ) the hop count increases from 7.2 hops for attribute encryption to 16.6 hops for event encryption .\nWith 10 subscriber groups the corresponding numbers are 24.2 and 251.0 , i.e. an order of magnitude difference .\n6 .\nRELATED WORK\nWang et al. 
have categorised the various security issues that need to be addressed in publish/subscribe systems in the future in [ 20 ] .\nThe paper is a comprehensive overview of security issues in publish/subscribe systems and as such tries to draw attention to the issues rather than providing solutions .\nBacon et al. in [ 1 ] examine the use of role-based access control in multi-domain , distributed publish/subscribe systems .\nTheir work is complementary to this paper : distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented .\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [ 10 ] .\nThey correctly state that a secure group communication approach is infeasible in an environment like publish/subscribe that has highly dynamic group memberships .\nAs a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from a SHB to a set of matching subscribers .\nWe assume in our work that the SHB is powerful enough to man\nFigure 8 : Hop Counts When Emulating Attribute Encryption\nage a TLS secured connection for each local subscriber .\nBoth Srivatsa et al. [ 19 ] and Raiciu et al. [ 16 ] present mechanisms for protecting the confidentiality of messages in decentralised publish/subscribe infrastructures .\nCompared to our work both papers aim to provide the means for protecting the integrity and confidentiality of messages whereas the goal for our work is to enforce access control inside the broker network .\nRaiciu et al. assume in their work that none of the brokers in the network are trusted and therefore all events are encrypted from publisher to subscriber and that all matching is based on encrypted events .\nIn contrast , we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching .\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication .\nThe contributions of Srivatsa et al. and Raiciu et al. are complementary to the contributions in this paper .\nFinally , Fiege et al. 
address the related topic of event visibility in [ 6 ] .\nWhile the work concentrated on using scopes as mechanism for structuring large-scale event-based systems , the notion of event visibility does resonate with access control to some extent .\n7 .\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish/subscribe system .\nEncryption causes an overhead , but i ) there may be no alternative when access control is required , and ii ) the performance penalty can be lessened with implementation optimisations , such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials .\nThis is particularly appropriate if broker-to-broker connections are secured by default so that wire-sniffing is not an issue .\nAttribute level encryption can be implemented in order to enforce fine-grained access control policies .\nIn addition to providing attribute-level access control , attribute encryption enables partially authorised brokers to implement contentbased routing based on the attributes that are accessible to them .\nOur experiments show that i ) by caching plaintext and ciphertext content when possible , we are able to deliver comparable performance to plaintext events , and ii ) that attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection .\nIn environments comprising multiple domains , where eventbrokers have different security credentials , we have quantified how a trade-off can be made between performance and expressiveness ."} {"id": "J-13", "title": "", "abstract": "", "keyphrases": ["hypergraph", "combinatori auction", "hypertre decomposit", "well-known mechan for resourc and task alloc", "hypertre-base decomposit method", "hypergraph hg", "structur item graph complex", "primal graph simplif", "structur item graph", "fix treewidth", "accept bid price", "polynomi time"], "prmu": [], "lvl-1": "On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions [Extended Abstract] Georg Gottlob Computing Laboratory Oxford University OX1 3QD Oxford, UK georg.gottlob@comlab.ox.ac.uk Gianluigi Greco Dipartimento di Matematica University of Calabria I-87030 Rende, Italy ggreco@mat.unical.it ABSTRACT The winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices.\nWhile this problem is in general NPhard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs).\nFormally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nNote that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness.\nIn fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem.\nIn this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3.\nMotivated by this bad news, we investigate different kinds of structural requirements that can be used 
to isolate tractable classes of combinatorial auctions.\nWe show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here.\nIndeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width.\nEven more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics; F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity 1.\nINTRODUCTION Combinatorial auctions.\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items.\nThis is desirable when a bidder``s valuation of a bundle of items is not equal to the sum of her valuations of the individual items.\nThis framework is currently used to regulate agents'' interactions in several application domains (cf., e.g., [21]) such as, electricity markets [13], bandwidth auctions [14], and transportation exchanges [18].\nFormally, a combinatorial auction is a pair I, B , where I = {I1, ..., Im} is the set of items the auctioneer has to sell, and B = {B1, ..., Bn} is the set of bids from the buyers interested in the items in I. Each bid Bi has the form item(Bi), pay(Bi) , where pay(Bi) is a rational number denoting the price a buyer offers for the items in item(Bi) \u2286 I.\nAn outcome for I, B is a subset b of B such that item(Bi)\u2229item(Bj) = \u2205, for each pair Bi and Bj of bids in b with i = j.\nThe winner determination problem.\nA crucial problem for combinatorial auctions is to determine the outcome b\u2217 that maximizes the sum of the accepted bid prices (i.e., Bi\u2208b\u2217 pay(Bi)) over all the possible outcomes.\nThis problem, called winner determination problem (e.g., [11]), is known to be intractable, actually NP-hard [17], and even not approximable in polynomial time unless NP = ZPP [19].\nHence, it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time (e.g., [15, 22, 12, 21]).\nIn fact, constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions.\nItem graphs.\nCurrently, the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph, which is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any 152 Figure 1: Example MaxWSP problem: (a) Hypergraph H I0,B0 , and a packing h for it; (b) Primal graph for H I0,B0 ; and, (c,d) Two item graphs for H I0,B0 .\nbid, the items occurring in it induce a connected subgraph.\nIndeed, the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph, i.e., a tree or, more generally, a graph having tree-like structure [3]-formally bounded treewidth [16].\nTo have some intuition on how item graphs can be built, we notice that bidder interaction in a combinatorial auction I, B can be 
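To fix the definitions before the structural results are discussed, the sketch below solves the winner determination problem by exhaustive search: it explores all subsets of bids, keeps those whose item sets are pairwise disjoint, and returns the maximum total price. It is exponential in the number of bids and serves only to make the problem statement concrete; the bids in `main` are a small hypothetical instance.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Definition-level brute-force solver for the winner determination problem. */
public final class BruteForceWinnerDetermination {

    public record Bid(Set<String> items, double pay) {}

    public static double bestRevenue(List<Bid> bids) {
        return search(bids, 0, new HashSet<>(), 0.0);
    }

    private static double search(List<Bid> bids, int index, Set<String> allocated, double revenue) {
        if (index == bids.size()) return revenue;
        // Option 1: reject the current bid.
        double best = search(bids, index + 1, allocated, revenue);
        // Option 2: accept it, but only if none of its items are already allocated.
        Bid bid = bids.get(index);
        if (bid.items().stream().noneMatch(allocated::contains)) {
            allocated.addAll(bid.items());
            best = Math.max(best, search(bids, index + 1, allocated, revenue + bid.pay()));
            allocated.removeAll(bid.items());
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical instance with three bids over five items.
        List<Bid> bids = List.of(
                new Bid(Set.of("I1", "I2", "I3"), 5.0),
                new Bid(Set.of("I1", "I4"), 4.0),
                new Bid(Set.of("I2", "I5"), 3.0));
        System.out.println(bestRevenue(bids));   // 7.0: the last two bids are disjoint
    }
}
```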
represented by means of a hypergraph H I,B such that its set of nodes N(H I,B ) coincides with set of items I, and where its edges E(H I,B ) are precisely the bids of the buyers {item(Bi) | Bi \u2208 B}.\nA special item graph for I, B is the primal graph of H I,B , denoted by G(H I,B ), which contains an edge between any pair of nodes in some hyperedge of H I,B .\nThen, any item graph for H I,B can be viewed as a simplification of G(H I,B ) obtained by deleting some edges, yet preserving the connectivity condition on the nodes included in each hyperedge.\nExample 1.\nThe hypergraph H I0,B0 reported in Figure 1.\n(a) is an encoding for a combinatorial auction I0, B0 , where I0 = {I1, ..., I5}, and item(Bi) = hi, for each 1 \u2264 i \u2264 3.\nThe primal graph for H I0,B0 is reported in Figure 1.\n(b), while two example item graphs are reported in Figure 1.\n(c) and (d), where edges required for maintaining the connectivity for h1 are depicted in bold.\n\u00a1 Open Problem: Computing structured item graphs efficiently.\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined.\nHowever, exponentially many item graphs might be associated with a combinatorial auction, and it is not clear how to determine whether a structured item graph of a certain (constant) treewidth exists, and if so, how to compute such a structured item graph efficiently.\nPolynomial time algorithms to find the best simplification of the primal graph were so far only known for the cases where the item graph to be constructed is a line [10], a cycle [4], or a tree [3], but it was an important open problem (cf. [3]) whether it is tractable to check if for a combinatorial auction, an item graph of treewidth bounded by a fixed natural number k exists and can be constructed in polynomial time, if so.\nWeighted Set Packing.\nLet us note that the hypergraph representation H I,B of a combinatorial auction I, B is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear (e.g., [17]).\nFormally, a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h, h \u2208 h with h = h , it holds that h \u2229 h = \u2205.\nLetting w be a weighting function for H, i.e., a polynomially-time computable function from E(H) to rational numbers, the weight of a packing h is the rational number w(h) = h\u2208h w(h), where w({}) = 0.\nThen, the maximum-weighted set packing problem for H w.r.t. w, denoted by MaxWSP(H, w), is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation for the winner determination problem, given a combinatorial auction I, B , it is sufficient to define the weighting function w I,B (item(Bi)) = pay(Bi).\nThen, the set of the solutions for the weighted set packing problem for H I,B w.r.t. 
w I,B coincides with the set of the solutions for the winner determination problem on I, B .\nExample 2.\nConsider again the hypergraph H I0,B0 reported in Figure 1.\n(a).\nAn example packing for H I0,B0 is h = {h1}, which intuitively corresponds to an outcome for I0, B0 , where the auctioneer accepted the bid B1.\nBy assuming that bids B1, B2, and B3 are such that pay(B1) = pay(B2) = pay(B3), the packing h is not a solution for the problem MaxWSP(H I0,B0 , w I0,B0 ).\nIndeed, the packing h\u2217 = {h2, h3} is such that w I0,B0 (h\u2217 ) > w I0,B0 (h).\n\u00a1 Contributions The primary aim of this paper is to identify large tractable classes for the winner determination problem, that are, moreover polynomially recognizable.\nTowards this aim, we first study structured item graphs and solve the open problem in [3].\nThe result is very bad news: It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3.\nMore formally, letting C(ig, k) denote the class of all the hypergraphs having an item tree of treewidth bounded by k, we prove that deciding whether a hypergraph (associated with a combinatorial auction problem) belongs to C(ig, 3) is NP-complete.\nIn the light of this result, it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or, equivalently, the winner determination problem.\nOur investigations, this time, led to very good news which are summarized below: For a hypergraph H, its dual \u00afH = (V, E) is such that nodes in V are in one-to-one correspondence with hyperedges in H, and for each node x \u2208 N(H), {h | x \u2208 h \u2227 h \u2208 153 E(H)} is in E.\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width[7] bounded by k (short: class C(hw, k) of hypergraphs).\nNote that a key issue of the tractability is to consider the hypertree width of the dual hypergraph \u00afH instead of the auction hypergraph H.\nIn fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), even when each node is contained in 3 hyperedges at most.\nFor some relevant special classes of hypergraphs in C(hw, k), we design a higly-parallelizeable algorithm for MaxWSP.\nSpecifically, if the weighting functions can be computed in logarithmic space and weights are polynomial (e.g., when all the hyperegdes have unitary weights and one is interested in finding the packing with the maximum number of edges), we show that MaxWSP can be solved by a LOGCFL algorithm.\nRecall, in fact, that LOGCFL is the class of decision problems that are logspace reducible to context free languages, and that LOGCFL \u2286 NC2 \u2286 P (see, e.g., [9]).\nSurprisingly, we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs.\nTo the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs.\nIn fact, we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach.\nIntuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality, but rather to some peculiarities in its definition.\nThe proof of the above results 
gives us some interesting insight into the notion of structured item graph.\nIndeed, we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph, which we call strict hypertree decompositions.\nA game-characterization for the notion of strict hypertree width is also proposed, which specializes the Robber and Marshals game in [6] (proposed to characterize the hypertree width), and which makes clear the additional requirements imposed on hypertree decompositions.\nThe rest of the paper is organized as follows.\nSection 2 discusses the intractability of structured item graphs.\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width, and discusses the cases where the algorithm is also highly parallelizable.\nThe comparison between the classes C(ig, k) and C(hw, k) is discussed in Section 4.\nFinally, in Section 5 we draw our conclusions by also outlining directions for further research.\n2.\nCOMPLEXITY OF STRUCTURED ITEM GRAPHS Let H be a hypergraph.\nA graph G = (V, E) is an item graph for H if V = N(H) and, for each h \u2208 E(H), the subgraph of G induced over the nodes in h is connected.\nAn important class of item graphs is that of structured item graphs, i.e., of those item graphs having bounded treewidth as formalized below.\nA tree decomposition [16] of a graph G = (V, E) is a pair T, \u03c7 , where T = (N, F) is a tree, and \u03c7 is a labelling function assigning to each vertex p \u2208 N a set of vertices \u03c7(p) \u2286 V , such that the following conditions are satisfied: (1) for each vertex b of G, there exists p \u2208 N such that b \u2208 \u03c7(p); (2) for each edge {b, d} \u2208 E, there exists p \u2208 N such that {b, d} \u2286 \u03c7(p); (3) for each vertex b of G, the set {p \u2208 N | b \u2208 \u03c7(p)} induces a connected subtree of T.\nThe width of T, \u03c7 is the number maxp\u2208N |\u03c7(p)| \u2212 1.\nThe treewidth of G, denoted by tw(G), is the minimum width over all its tree decompositions.\nThe winner determination problem can be solved in polynomial time on item graphs having bounded treewidth [3].\nTheorem 1 (cf.
[3]).\nAssume a k-width tree decomposition T, \u03c7 of an item graph for H is given.\nThen, MaxWSP(H, w) can be solved in time O(|T|2 \u00d7(|E(H)|+1)k+1 ).\nMany item graphs can be associated with a hypergraph.\nAs an example, observe that the item graph in Figure 1.\n(c) has treewidth 1, while Figure 1.\n(d) reports an item graph whose treewidth is 2.\nIndeed, it was an open question whether for a given constant k it can be checked in polynomial time if an item graph of treewidth k exists, and if so, whether such an item graph can be efficiently computed.\nLet C(ig, k) denote the class of all the hypergraphs having an item graph G such that tw(G) \u2264 k.\nThe main result of this section is to show that the class C(ig, k) is hard to recognize.\nTheorem 2.\nDeciding whether a hypergraph H belongs to C(ig, 3) is NP-hard.\nThe proof of this result relies on an elaborate reduction from the Hamiltonian path problem HP(s, t) of deciding whether there is an Hamiltonian path from a node s to a node t in a directed graph G = (N, E).\nTo help the intuition, we report here a high-level overview of the main ingredients exploited in the proof1 .\nThe general idea it to build a hypergraph HG such that there is an item graph G for HG with tw(G ) \u2264 3 if and only if HP(s, t) over G has a solution.\nFirst, we discuss the way HG is constructed.\nSee Figure 2.\n(a) for an illustration, where the graph G consists of the nodes s, x, y, and t, and the set of its edges is {e1 = (s, x), e2 = (x, y), e3 = (x, t), e4 = (y, t)}.\nFrom G to HG.\nLet G = (N, E) be a directed graph.\nThen, the set of the nodes in HG is such that: for each x \u2208 N, N(HG) contains the nodes bsx, btx, bx, bx, bdx; for each e = (x, y) \u2208 E, N(HG) contains the nodes nsx, nsx, nty, nty , nse x and nte y. 
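The node set of HG just described can be written down mechanically. The following Python sketch builds N(HG) for a directed graph given as a list of named edges. Note that the doubled names in the extracted list (bx, bx; nsx, nsx; nty, nty) stand for two distinct nodes each, since otherwise the list would not mention them twice; the sketch marks the second copy of each pair with a "_bar" suffix, which is only an assumed rendering of the paper's notation, and all identifiers are illustrative rather than taken from the paper.

```python
def reduction_nodes(nodes, edges):
    """Build the node set N(H_G) of the reduction hypergraph H_G from a directed
    graph G = (N, E). `edges` is a list of (edge_name, tail, head) triples.
    Doubled names in the text (bx, bx / nsx, nsx / nty, nty) are treated as two
    distinct nodes; the second copy gets an (assumed) '_bar' suffix."""
    N = set()
    for x in nodes:
        # five nodes per graph node x
        N |= {f"bs_{x}", f"bt_{x}", f"b_{x}", f"b_{x}_bar", f"bd_{x}"}
    for e, x, y in edges:
        # per edge e = (x, y): the ns/nt nodes of its endpoints, plus nse_x and nte_y
        N |= {f"ns_{x}", f"ns_{x}_bar", f"nt_{y}", f"nt_{y}_bar",
              f"nse_{x}_{e}", f"nte_{y}_{e}"}
    return N

# The graph of Figure 2(a): nodes s, x, y, t and edges e1 = (s, x), e2 = (x, y),
# e3 = (x, t), e4 = (y, t).
G_nodes = ["s", "x", "y", "t"]
G_edges = [("e1", "s", "x"), ("e2", "x", "y"), ("e3", "x", "t"), ("e4", "y", "t")]
print(sorted(reduction_nodes(G_nodes, G_edges)))
```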
No other node is in N(HG).\nHyperedges in HG are of three kinds: 1) for each x \u2208 N, E(HG) contains the hyperedges: \u2022 Sx = {bsx} \u222a {nse x | e = (x, y) \u2208 E}; \u2022 Tx = {btx} \u222a {nte x | e = (z, x) \u2208 E}; \u2022 A1 x = {bdx, bx}, A2 x = {bdx, bx}, and A3 x = {bx, bx} -notice that these hyperedges induce a clique on the nodes {bx, bx, bdx}; 1 Detailed proofs can be found in the Appendix, available at www.mat.unical.it/\u223cggreco/papers/ca.pdf.\n154 Figure 2: Proof of Theorem 2: (a) from G to HG - hyperedges in 1) and 2) are reported only; (b) a skeleton for a tree decomposition TD for HG.\n\u2022 SA1 x = {bsx, bx}, SA2 x = {bsx, bx}, SA3 x = {bsx, bdx} -notice that these hyperedges plus A1 x, A2 x, and A3 x induce a clique on the nodes {bsx, bx, bx, bdx}; \u2022 TA1 x = {btx, bx}, TA2 x = {btx, bx}, and TA3 x = {btx, bdx} -notice that these hyperedges plus A1 x, A2 x, and A3 x induce a clique on the nodes {btx, bx, bx, bdx}; 2) for each e = (x, y) \u2208 E, E(HG) contains the hyperedges: \u2022 SHx = {nsx, nsx}; \u2022 THy = {nty, nty }; \u2022 SEe = {nsx, nse x} and SEe = {nsx, nse x} -notice that these two hyperedges plus SHx induce a clique on the nodes {nsx, nsx, nse x}; \u2022 TEe = {nty, nte y} and TEe = {nty , nte y} -notice that these two hyperedges plus THy induce a clique on the nodes {nty, nty , nte y}.\nNotice that each of the above hyperedges but those of the form Sx and Tx contains exactly two nodes.\nAs an example of the hyperedges of kind 1) and 2), the reader may refer to the example construction reported in Figure 2.\n(a), and notice, for instance, that Sx = {bsx, nse2 x , nse3 x } and that Tt = {btt, nte4 t , nte3 t }.\n3) finally, we denote by DG the set containing the hyperedges in E(HG) of the third kind.\nIn the reduction we are exploiting, DG can be an arbitrary set of hyperedges satisfying the four conditions that are discussed below.\nLet PG be the set of the following |PG| \u2264 |N| + 3 \u00d7 |E| pairs: PG = {(bx, bx) | x \u2208 N} \u222a {(nsx, nsx), (nty, nty ), (nse x, nte y) | e = (x, y) \u2208 E}.\nAlso, let I(v) denote the set {h \u2208 E(H) | v \u2208 h} of the hyperedges of H that are touched by v; and, for a set V \u2286 N(H), let I(V ) = v\u2208V I(v).\nThen, DG has to be a set such that: (c1) \u2200(\u03b1, \u03b2) \u2208 PG, I(\u03b1) \u2229 I(\u03b2) \u2229 DG = \u2205; (c2) \u2200(\u03b1, \u03b2) \u2208 PG, I(\u03b1) \u222a I(\u03b2) \u2287 DG; (c3) \u2200\u03b1 \u2208 N such that \u2203\u03b2 \u2208 N with (\u03b1, \u03b2) \u2208 PG or (\u03b2, \u03b1) \u2208 PG, it holds: I(\u03b1) \u2229 DG = \u2205; and, (c4) \u2200S \u2286 N such that |S| \u2264 3 and where \u2203\u03b1, \u03b2 \u2208 S with (\u03b1, \u03b2) \u2208 PG, it is the case that: I(S) \u2287 DG.\nIntuitively, the set DG is such that each of its hyperedges is touched by exactly one of the two nodes in every pair 155 of PG - cf. (c1) and (c2).\nMoreover, hyperedges in DG touch only vertices included in at least a pair of PG - cf. (c3); and, any triple of nodes is not capable of touching all the elements of DG if none of the pairs that can be built from it belongs to PG - cf. 
(c4).\nThe reader may now ask whether a set DG exists at all satisfying (c1), (c2), (c3) and (c4).\nIn the following lemma, we positively answer this question and refer the reader to its proof for an example construction.\nLemma 1.\nA set DG, with |DG| = 2 \u00d7 |PG| + 2, satisfying conditions (c1), (c2), (c3), and (c4) can be built in time O(|PG|2 ).\nKey Ingredients.\nWe are now in the position of presenting an overview of the key ingredients of the proof.\nLet G be an arbitrary item graph for HG, and let TD = T, \u03c7 be a 3-width tree decomposition of G (note that, because of the cliques, e.g., on the nodes {bsx, bx, bx, bdx}, any item graph for HG has treewidth 3 at least).\nThere are three basic observations serving the purpose of proving the correctness of the reduction.\nBlocks of TD: First, we observe that TD must contain some special kinds of vertex.\nSpecifically, for each node x \u2208 N, TD contains a vertex bs(x) such that \u03c7(bs(x)) \u2287 {bsx, bx, bx, bdx}, and a vertex bt(x) such that \u03c7(bt(x)) \u2287 {btx, bx, bx, bdx}.\nAnd, for each edge e = (x, y) \u2208 E, TD contains a vertex ns(x,e) such that \u03c7(ns(x,e)) \u2287 {nse x, nsx, nsx}, and a vertex nt(y,e) such that \u03c7(nt(y,e)) \u2287 {nte y, nty, nty }.\nIntuitively, these vertices are required to cover the cliques of HG associated with the hyperedges of kind 1) and 2).\nEach of these vertices plays a specific role in the reduction.\nIndeed, each directed edge e = (x, y) \u2208 E is encoded in TD by means of the vertices: ns(x,e), representing precisely that e starts from x; and, nt(y,e), representing precisely that e terminates into y. Also, each node x \u2208 N is encoded in TD be means of the vertices: bs(x), representing the starting point of edges originating from x; and, bt(x), representing the terminating point of edges ending into x.\nAs an example, Figure 2.\n(b) reports the skeleton of a tree decomposition TD.\nThe reader may notice in it the blocks defined above and how they are related with the hypergraph HG in Figure 2.\n(a) - other blocks in it (of the form w(x,y)) are defined next.\nConnectedness between blocks, and uniqueness of the connections: The second crucial observation is that in the path connecting a vertex of the form bs(x) (resp., bt(y)) with a vertex of the form ns(x,e) (resp., nt(y,e)) there is one special vertex of the form w(x,y) such that: \u03c7(w(x,y)) \u2287 {nse x , nte y }, for some edge e = (x, y) \u2208 E. 
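Since the remainder of the argument repeatedly inspects which bags a width-3 tree decomposition of an item graph must contain, it may help to have the defining conditions in executable form. The sketch below checks conditions (1)-(3) of a tree decomposition, as recalled at the beginning of this section, and returns its width (largest bag size minus one); it is an illustrative helper under assumed input conventions, not part of the paper's construction.

```python
def tree_decomposition_width(graph_edges, tree_edges, chi):
    """Verify that (T, chi) is a tree decomposition of the graph given by
    `graph_edges` and return its width, or None if some condition fails.
    graph_edges: iterable of 2-element tuples; tree_edges: edges of the tree T;
    chi: dict mapping each tree vertex to the set of graph vertices in its bag."""
    tree_nodes = set(chi)
    graph_vertices = {v for e in graph_edges for v in e}
    # (1) every vertex of G occurs in some bag
    if any(all(v not in chi[p] for p in tree_nodes) for v in graph_vertices):
        return None
    # (2) every edge of G is contained in some bag
    if any(all(not set(e) <= chi[p] for p in tree_nodes) for e in graph_edges):
        return None
    # (3) the bags containing a given vertex induce a connected subtree of T
    adj = {p: set() for p in tree_nodes}
    for p, q in tree_edges:
        adj[p].add(q)
        adj[q].add(p)
    for v in graph_vertices:
        holder = {p for p in tree_nodes if v in chi[p]}
        stack, seen = [next(iter(holder))], set()
        while stack:
            p = stack.pop()
            if p in seen:
                continue
            seen.add(p)
            stack.extend(adj[p] & holder)
        if seen != holder:
            return None
    # width = size of the largest bag minus one
    return max(len(chi[p]) for p in tree_nodes) - 1
```

We now continue with the connectivity argument for the special vertices w(x,y).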
Guaranteeing the existence of one such vertex is precisely the role played by the hyperedges in DG.\nThe arguments for the proof are as follows.\nFirst, we observe that I(\u03c7(bs(x))) \u2229 I(\u03c7(ns(x,e))) \u2287 DG \u222a {Sx} and I(\u03c7(bt(y))) \u2229 I(\u03c7(nt(y,e))) \u2287 DG \u222a {Ty}.\nThen, we show a property stating that for a pair of consecutive vertices p and q in the path connecting bs(x) and ns(x,e) (resp., bt(y) and nt(y,e)), I(\u03c7(p) \u2229 \u03c7(q)) \u2287 I(\u03c7(bs(x))) \u2229 I(\u03c7(ns(x,e))) (resp., I(\u03c7(p) \u2229 \u03c7(q)) \u2287 I(\u03c7(bt(x))) \u2229 I(\u03c7(nt(y,e)))).\nThus, we have: I(\u03c7(p) \u2229 \u03c7(q)) \u2287 DG \u222a{Sx} (resp., I(\u03c7(p)\u2229\u03c7(q)) \u2287 DG \u222a{Ty}).\nBased on this observation, and by exploiting the properties of the hyperedges in DG, it is not difficult to show that any pair of consecutive vertices p and q must share two nodes of HG forming a pair in PG, and must both touch Sx (resp., Ty).\nWhen the treewidth of G is 3, we can conclude that a vertex, say w(x,y), in this path is such that \u03c7(w(x,y)) \u2287 {nse x , nte y }, for some edge e = (x, y) \u2208 E - to this end, note that nse x \u2208 Sx, nte t \u2208 Ty, and I(\u03c7(w(x,y))) \u2287 DG.\nIn particular, w(x,y) is the only kind of vertex satisfying these conditions, i.e., in the path there is no further vertex of the form w(x,z), for z = y (resp., w(z,y), for z = x).\nTo help the intuition, we observe that having a vertex of the form w(x,y) in TD corresponds to the selection of an edge from node x to node y in the Hamiltonian path.\nIn fact, given the uniqueness of these vertices selected for ensuring the connectivity, a one-to-one correspondence can be established between the existence of a Hamiltonian path for G and the vertices of the form w(x,y).\nAs an example, in Figure 2.\n(b), the vertices of the form w(s,x), w(x,y), and w(y,t) are in TD, and GT D shows the corresponding Hamiltonian path.\nUnused blocks: Finally, the third ingredient of the proof is the observation that if a vertex of the form w(x,y), for an edge e = (x, y) \u2208 E is not in TD (i.e., if the edge (x, y) does not belong to the Hamiltonian path), then the corresponding block ns(x,e ) (resp., nt(y,e )) can be arbitrarily appended in the subtree rooted at the block ns(x,e) (resp., nt(y,e)), where e is the edge of the form e = (x, z) (resp., e = (z, y)) such that w(x,z) (resp., w(z,y)) is in TD.\nE.g., Figure 2.\n(a) shows w(x,t), which is not used in TD, and Figure 2.\n(b) shows how the blocks ns(x,e3) and nt(t,e3) can be arranged in TD for ensuring the connectedness condition.\n3.\nTRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS Since constructing structured item graphs is intractable, it is relevant to assess whether other structural restrictions can be used to single out classes of tractable MaxWSP instances.\nTo this end, we focus on the notion of hypertree decomposition [7], which is a natural generalization of hypergraph acyclicity and which has been profitably used in other domains, e.g, constraint satisfaction and database query evaluation, to identify tractability islands for NP-hard problems.\nA hypertree for a hypergraph H is a triple T, \u03c7, \u03bb , where T = (N, E) is a rooted tree, and \u03c7 and \u03bb are labelling functions which associate each vertex p \u2208 N with two sets \u03c7(p) \u2286 N(H) and \u03bb(p) \u2286 E(H).\nIf T = (N , E ) is a subtree of T, we define \u03c7(T ) = v\u2208N \u03c7(v).\nWe denote the set of vertices N of T by 
vertices(T).\nMoreover, for any p \u2208 N, Tp denotes the subtree of T rooted at p. Definition 1.\nA hypertree decomposition of a hypergraph H is a hypertree HD = T, \u03c7, \u03bb for H which satisfies all the following conditions: 1.\nfor each edge h \u2208 E(H), there exists p \u2208 vertices(T) such that h \u2286 \u03c7(p) (we say that p covers h); 156 Figure 3: Example MaxWSP problem: (a) Hypergraph H1; (b) Hypergraph \u00afH1; (b) A 2-width hypertree decomposition of \u00afH1.\n2.\nfor each node Y \u2208 N(H), the set {p \u2208 vertices(T) | Y \u2208 \u03c7(p)} induces a (connected) subtree of T; 3.\nfor each p \u2208 vertices(T), \u03c7(p) \u2286 N(\u03bb(p)); 4.\nfor each p \u2208 vertices(T), N(\u03bb(p)) \u2229 \u03c7(Tp) \u2286 \u03c7(p).\nThe width of a hypertree decomposition T, \u03c7, \u03bb is maxp\u2208vertices(T )|\u03bb(p)|.\nThe HYPERTREE width hw(H) of H is the minimum width over all its hypertree decompositions.\nA hypergraph H is acyclic if hw(H) = 1.\nP Example 3.\nThe hypergraph H I0,B0 reported in Figure 1.\n(a) is an example acyclic hypergraph.\nInstead, both the hypergraphs H1 and \u00afH1 shown in Figure 3.\n(a) and Figure 3.\n(b), respectively, are not acyclic since their hypertree width is 2.\nA 2-width hypertree decomposition for \u00afH1 is reported in Figure 3.\n(c).\nIn particular, observe that H1 has been obtained by adding the two hyperedges h4 and h5 to H I0,B0 to model, for instance, that two new bids, B4 and B5, respectively, have been proposed to the auctioneer.\n\u00a1 In the following, rather than working on the hypergraph H associated with a MaxWSP problem, we shall deal with its dual \u00afH, i.e., with the hypergraph such that its nodes are in one-to-one correspondence with the hyperedges of H, and where for each node x \u2208 N(H), {h | x \u2208 h \u2227 h \u2208 E(H)} is in E( \u00afH).\nAs an example, the reader may want to check again the hypergraph H1 in Figure 3.\n(a) and notice that the hypergraph in Figure 3.\n(b) is in fact its dual.\nThe rationale for this choice is that issuing restrictions on the original hypergraph is a guarantee for the tractability only in very simple scenarios.\nTheorem 3.\nOn the class of acyclic hypergraphs, MaxWSP is (1) in P if each node occurs into two hyperedges at most; and, (2) NP-hard, even if each node is contained into three hyperedges at most.\n3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems For a fixed constant k, let C(hw, k) denote the class of all the hypergraphs whose dual hypergraphs have hypertree width bounded by k.\nThe maximum weighted-set packing problem can be solved in polynomial time on the class C(hw, k) by means of the algorithm ComputeSetPackingk, shown in Figure 4.\nThe algorithm receives in input a hypergraph H, a weighting function w, and a k-width hypertree decomposition HD = T=(N, E), \u03c7, \u03bb of \u00afH. 
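The algorithm works on a hypertree decomposition of the dual hypergraph, so a small helper for building the dual is useful. The sketch below follows the definition given above: the nodes of the dual are the bids (hyperedges of H), and for every item x there is one hyperedge collecting the bids that contain x. The data layout (dicts keyed by bid and item names) and the identifiers are illustrative choices, not the paper's.

```python
def dual_hypergraph(bids):
    """bids: dict mapping a bid name to the set of items it contains (the
    hyperedges of H). Returns the dual as a dict mapping each item x to the
    frozenset of bids containing x; the nodes of the dual are the bid names."""
    items = set().union(*bids.values()) if bids else set()
    return {x: frozenset(b for b, s in bids.items() if x in s) for x in items}

# A small illustrative instance (not the paper's running example):
bids = {"h1": {"I1", "I2", "I3"}, "h2": {"I3", "I4"}, "h3": {"I4", "I5"}}
print(dual_hypergraph(bids))
```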
For each vertex v \u2208 N, let Hv be the hypergraph whose set of nodes N(Hv) \u2286 N(H) coincides with \u03bb(v), and whose set of edges E(Hv) \u2286 E(H) coincides with \u03c7(v).\nIn an initialization step, the algorithm equips each vertex v with all the possible packings for Hv, which are stored in the set Hv.\nNote that the size of Hv is bounded by (|E(H)| + 1)k , since each node in \u03bb(v) is either left uncovered in a packing or is covered with precisely one of the hyperedges in \u03c7(v) \u2286 E(H).\nThen, ComputeSetPackingk is designed to filter these packings by retaining only those that conform with some packing for Hc, for each child c of v in T, as formalized next.\nLet hv and hc be two packings for Hv and Hc, respectively.\nWe say that hv conforms with hc, denoted by hv \u2248 hc, if: for each h \u2208 hc \u2229 E(Hv), h is in hv; and, for each h \u2208 (E(Hc) \u2212 hc), h is not in hv.\nExample 4.\nConsider again the hypertree decomposition of \u00afH1 reported in Figure 3.\n(c).\nThen, the set of all the possible packings (which are built in the initialization step of ComputeSetPackingk), for each of its vertices, is reported in Figure 5.\n(a) (Figure 5: Example application of Algorithm ComputeSetPackingk).\nThe pseudocode of the procedure is given in Figure 4 (Algorithm ComputeSetPackingk); in outline: Input: H, w, and a k-width hypertree decomposition HD = T =(N, E), \u03c7, \u03bb of \u00afH; Output: A solution to MaxWSP(H, w).\nThe algorithm maintains, for each vertex v \u2208 N, the set Hv of packings for Hv; a packing h\u2217 for H; a rational number v hv for each partial packing hv for Hv; and a partial packing hhv,c for Hc, for each partial packing hv for Hv and each (v, c) \u2208 E.\nProcedure BottomUp processes the vertices of T from the leaves upwards: a vertex v is processed only after all its children are done; for each child c, Hv is first filtered by removing every hv for which no hc \u2208 Hc conforms with hv; then, for each surviving hv, the weight v hv is initialized to w(hv) and, for each child c, the conforming packing \u00afhc maximizing c hc \u2212 w(hc \u2229 hv) is recorded as hhv,c (the best packing) and v hv is increased by c \u00afhc \u2212 w(\u00afhc \u2229 hv).\nIn the main procedure, after BottomUp terminates, the packing \u00afhr maximizing r hr at the root r is included in h\u2217 , and procedure TopDown(v, \u00afhv) recursively includes, for each child c with
(v, c) \u2208 E, the recorded packing \u00afhc := h\u00afhv,c into h\u2217 and recurses on (c, \u00afhc); finally, h\u2217 is returned.\nFor instance, the root v1 is such that Hv1 = { {}, {h1}, {h3}, {h5} }.\nMoreover, an arrow from a packing hc to hv in Figure 5.\n(a) denotes that hv conforms with hc.\nFor instance, the reader may check that the packing {h3} \u2208 Hv1 conforms with the packing {h2, h3} \u2208 Hv3 , but does not conform with {h1} \u2208 Hv3 .\n\u00a1 ComputeSetPackingk builds a solution by traversing T in two phases.\nIn the first phase, vertices of T are processed from the leaves to the root r, by means of the procedure BottomUp.\nFor each node v being processed, the set Hv is preliminarily updated by removing all the packings hv that do not conform with any packing for some of the children of v.\nAfter this filtering is performed, the weight v hv is updated.\nIntuitively, v hv stores the weight of the best partial packing for H computed by using only the hyperedges occurring in \u03c7(Tv).\nIndeed, if v is a leaf, then v hv = w(hv).\nOtherwise, for each child c of v in T, v hv is updated with the maximum of c hc \u2212 w(hc \u2229 hv) over all the packings hc that conform with hv (resolving ties arbitrarily).\nThe packing \u00afhc for which this maximum is achieved is stored in the variable hhv,c.\nIn the second phase, the tree T is processed starting from the root.\nFirstly, the packing h\u2217 is selected that maximizes the weight equipped with the packings in Hr.\nThen, procedure TopDown is used to extend h\u2217 to all the other partial packings for vertices of T.\nIn particular, at each vertex v, h\u2217 is extended with the packing hhv,c, for each child c of v. Example 5.\nAssume that, in our running example, w(h1) = w(h2) = w(h3) = w(h4) = 1.\nThen, an execution of ComputeSetPackingk is graphically depicted in Figure 5.\n(b), where an arrow from a packing hc to a packing hv is used to denote that hc = hhv,c. Specifically, the choices made during the computation are such that the packing {h2, h3} is computed.\nIn particular, during the bottom-up phase, we have that: (1) v4 is processed, and we set v4 {h2} = v4 {h4} = 1 and v4 {} = 0; (2) v3 is processed, and we set v3 {h1} = v3 {h3} = 1 and v3 {} = 0; (3) v2 is processed, and we set v2 {h1} = v2 {h2} = v2 {h3} = v2 {h4} = 1, v2 {h2,h3} = 2 and v3 {} = 0; (4) v1 is processed and we set v1 {h1} = 1, v1 {h5} = v1 {h3} = 2 and v1 {} = 0.\nFor instance, note that v1 {h5} = 2 since {h5} conforms with the packing {h4} of Hv2 such that v2 {h4} = 1.\nThen, at the beginning of the top-down phase, ComputeSetPackingk selects {h3} as a packing for Hv1 and propagates this choice in the tree.\nEquivalently, the algorithm may have chosen {h5}.\nAs a further example, the way the solution {h1} is obtained by the algorithm when w(h1) = 5 and w(h2) = w(h3) = w(h4) = 1 is reported in Figure 5.\n(c).\nNotice that, this time, in the top-down phase, ComputeSetPackingk starts by selecting {h1} as the best packing for Hv1 .\n\u00a1 Theorem 4.\nLet H be a hypergraph and w be a weighting function for it.\nLet HD = T, \u03c7, \u03bb be a complete k-width hypertree decomposition of \u00afH. Then, ComputeSetPackingk on input H, w, and HD correctly outputs a solution for MaxWSP(H, w) in time O(|T| \u00d7 (|E(H)| + 1)2k ).\nProof.\n[Sketch] We observe that h\u2217 (computed by ComputeSetPackingk) is a packing for H.
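Before continuing with the argument, it may help to see the two phases in executable form. The following Python sketch mirrors the BottomUp/TopDown processing described above, under simplifying assumptions that are not the paper's: the decomposition of the dual hypergraph is supplied as a rooted tree via a child map and a bag map chi listing, for each decomposition vertex, the bids it mentions; candidate packings at a vertex are enumerated naively (exponential in |chi(v)|, acceptable only as an illustration, not as the polynomial bound of Theorem 4); and ties are broken arbitrarily. All identifiers are illustrative.

```python
from itertools import combinations

def all_packings(bid_names, bids):
    """All pairwise item-disjoint subsets of bid_names (naive enumeration)."""
    packings = [frozenset()]
    for r in range(1, len(bid_names) + 1):
        for combo in combinations(sorted(bid_names), r):
            if all(bids[a].isdisjoint(bids[b]) for a, b in combinations(combo, 2)):
                packings.append(frozenset(combo))
    return packings

def conforms(hv, hc, chi_v, chi_c):
    """hv conforms with hc: every bid of hc known at v is kept in hv, and no bid
    of chi(c) that hc leaves out reappears in hv."""
    return (hc & chi_v) <= hv and not (hv & (chi_c - hc))

def compute_set_packing(bids, w, children, chi, root):
    """Sketch of the two-phase processing: bottom-up weight propagation followed
    by top-down extraction of a maximum-weight packing of H."""
    weight = lambda h: sum(w[b] for b in h)
    H, best, choice = {}, {}, {}

    def bottom_up(v):
        for c in children.get(v, []):
            bottom_up(c)
        H[v] = all_packings(chi[v], bids)
        # keep only packings that conform with some packing of every child
        for c in children.get(v, []):
            H[v] = [hv for hv in H[v]
                    if any(conforms(hv, hc, chi[v], chi[c]) for hc in H[c])]
        for hv in H[v]:
            best[(v, hv)] = weight(hv)
            for c in children.get(v, []):
                cand = [hc for hc in H[c] if conforms(hv, hc, chi[v], chi[c])]
                hc_best = max(cand, key=lambda hc: best[(c, hc)] - weight(hc & hv))
                choice[(v, hv, c)] = hc_best
                best[(v, hv)] += best[(c, hc_best)] - weight(hc_best & hv)

    def top_down(v, hv, out):
        out |= hv
        for c in children.get(v, []):
            top_down(c, choice[(v, hv, c)], out)

    bottom_up(root)
    hr = max(H[root], key=lambda h: best[(root, h)])
    solution = set()
    top_down(root, hr, solution)
    return solution
```

Returning to the proof that h\u2217 is a packing: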
Indeed, consider a pair of hyperedges h1 and h2 in h\u2217 , and assume, for the sake of contradiction, that h1 \u2229 h2 \u2260 \u2205.\nLet v1 (resp., v2) be an arbitrary vertex of T, for which ComputeSetPackingk included h1 (resp., h2) in h\u2217 in the top-down computation.\nBy construction, we have h1 \u2208 \u03c7(v1) and h2 \u2208 \u03c7(v2).\nLet I be an element in h1 \u2229 h2.\nIn the dual hypergraph \u00afH, I is a hyperedge in E( \u00afH) which covers both the nodes h1 and h2.\nHence, by condition (1) in Definition 1, there is a vertex v \u2208 vertices(T) such that {h1, h2} \u2286 \u03c7(v).\nNote that, because of the connectedness condition in Definition 1, we can also assume, w.l.o.g., that v is in the path connecting v1 and v2 in T. Let hv \u2208 Hv denote the element added by ComputeSetPackingk into h\u2217 during the top-down phase.\nSince the elements in Hv are packings for Hv, it is the case that either h1 \u2209 hv or h2 \u2209 hv.\nAssume, w.l.o.g., that h1 \u2209 hv, and notice that each vertex w in T in the path connecting v to v1 is such that h1 \u2208 \u03c7(w), because of the connectedness condition.\nHence, because of the definition of conformance, the packing hw selected by ComputeSetPackingk to be added at vertex w in h\u2217 must be such that h1 \u2209 hw.\nThis holds in particular for w = v1.\nContradiction with the definition of v1.\nTherefore, h\u2217 is a packing for H.\nIt remains then to show that it has the maximum weight over all the packings for H. To this aim, we can use structural induction on T to prove that, in the bottom-up phase, the variable v hv is updated to contain the weight of the packing on the edges in \u03c7(Tv), which contains hv and which has the maximum weight over all such packings for the edges in \u03c7(Tv).\nThen, the result follows, since in the top-down phase, the packing hr giving the maximum weight over \u03c7(Tr) = E(H) is first included in h\u2217 , and then extended at each node c with the packing hhv,c conforming with hv and such that the maximum value of v hv is achieved.\nAs for the complexity, observe that the initialization step requires the construction of the set Hv, for each vertex v, and each set has size (|E(H)| + 1)k at most.\nThen, the function BottomUp checks the conformance of the packings in Hv with the packings in Hc, for each pair (v, c) \u2208 E, and updates the weight v hv .\nThese tasks can be carried out in time O((|E(H)| + 1)2k ) and must be repeated for each edge in T, i.e., O(|T|) times.\nFinally, the function TopDown can be implemented in linear time in the size of T, since it just requires updating h\u2217 by accessing the variable hhv,c.\nThe above result shows that if a hypertree decomposition of width k is given, the MaxWSP problem can be efficiently solved.\nMoreover, differently from the case of structured item graphs, it is well known that deciding the existence of a k-bounded hypertree decomposition and computing one (if any) are problems which can be efficiently solved in polynomial time [7].\nTherefore, Theorem 4 witnesses that the class C(hw, k) actually constitutes a tractable class for the winner determination problem.\nAs the following theorem shows, for large subclasses (that depend only on how the weight function is specified), MaxWSP(H, w) is even highly parallelizable.\nLet us call a weighting function smooth if it is logspace computable and if all weights are polynomial (and thus just require O(log n) bits for their representation).\nRecall that LOGCFL is a parallel
complexity class contained in NC2, cf. [9].\nThe functional version of LOGCFL is LLOGCFL , which is obtained by equipping a logspace transducer with an oracle in LOGCFL.\nTheorem 5.\nLet H be a hypergraph in C(hw, k), and let w be a smooth weighting function for it.\nThen, MaxWSP(H, w) is in LLOGCFL .\n4.\nHYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS Given that the class C(hw, k) has been shown to be an island of tractability for the winner determination problem, and given that the class C(ig, k) has been shown not to be efficiently recognizable, one may be inclined to think that there are instances having unbounded hypertree width, but admitting an item graph of bounded tree width (so that the intractability of structured item graphs would lie in their generality).\nSurprisingly, we establish this is not the case.\nThe line of the proof is to first show that structured item graphs are in one-to-one correspondence with a special kind of hypertree decompositions of the dual hypergraph, which we shall call strict.\nThen, the result will follow by proving that k-width strict hypertree decompositions are less powerful than kwith hypertree decompositions.\n4.1 Strict Hypertree Decompositions Let H be a hypergraph, and let V \u2286 N(H) be a set of nodes and X, Y \u2208 N(H).\nX is [V ]-adjacent to Y if there exists an edge h \u2208 E(H) such that {X, Y } \u2286 (h \u2212 V ).\nA [V ]-path \u03c0 from X to Y is a sequence X = X0, ... , X = Y of variables such that: Xi is [V ]-adjacent to Xi+1, for each i \u2208 [0... -1].\nA set W \u2286 N(H) of nodes is [V ]-connected if \u2200X, Y \u2208 W there is a [V ]-path from X to Y .\nA [V ]-component is a maximal [V ]-connected non-empty set of nodes W \u2286 (N(H) \u2212 V ).\nFor any [V ]-component C, let E(C) = {h \u2208 E(H) | h \u2229 C = \u2205}.\nDefinition 2.\nA hypertree decomposition HD = T, \u03c7, \u03bb of H is strict if the following conditions hold: 1.\nfor each pair of vertices r and s in vertices(T) such that s is a child of r, and for each [\u03c7(r)]-component Cr s.t. 
Cr \u2229 \u03c7(Ts) = \u2205, Cr is a [\u03c7(r) \u2229 N(\u03bb(r) \u2229 \u03bb(s))]-component; 2.\nfor each edge h \u2208 E(H), there is a vertex p such that h \u2208 \u03bb(p) and h \u2286 \u03c7(p) (we say p strongly covers h); 3.\nfor each edge h \u2208 E(H), the set {p \u2208 vertices(T) | h \u2208 \u03bb(p)} induces a (connected) subtree of T.\nThe strict hypertree width shw(H) of H is the minimum width over all its strict hypertree decompositions.\nP The basic relationship between nice hypertree decompositions and structured item graphs is shown in the following theorem.\nTheorem 6.\nLet H be a hypergraph such that for each node v \u2208 N(H), {v} is in E(H).\nThen, a k-width tree decomposition of an item graph for H exists if and only if \u00afH has a (k + 1)-width strict hypertree decomposition2 .\nNote that, as far as the maximum weighted-set packing problem is concerned, given a hypergraph H, we can always assume that for each node v \u2208 N(H), {v} is in E(H).\nIn fact, if this hyperedge is not in the hypergraph, then it can be added without loss of generality, by setting w({v}) = 0.\nTherefore, letting C(shw, k) denote the class of all the hypergraphs whose dual hypergraphs (associated with maximum 2 The term +1 only plays the technical role of taking care of the different definition of width for tree decompositions and hypertree decompositions.\n159 weighted-set packing problems) have strict hypertree width bounded by k, we have that C(shw, k + 1) = C(ig, k).\nBy definition, strict hypertree decompositions are special hypertree decompositions.\nIn fact, we are able to show that the additional conditions in Definition 2 induce an actual restriction on the decomposition power.\nTheorem 7.\nC(ig, k) = C(shw, k + 1) \u2282 C(hw, k + 1).\nA Game Theoretic View.\nWe shed further lights on strict hypertree decompositions by discussing an interesting characterization based on the strict Robber and Marshals Game, defined by adapting the Robber and Marshals game defined in [6], which characterizes hypertree width.\nThe game is played on a hypergraph H by a robber against k marshals which act in coordination.\nMarshals move on the hyperedges of H, while the robber moves on nodes of H.\nThe robber sees where the marshals intend to move, and reacts by moving to another node which is connected with its current position and through a path in G(H) which does not use any node contained in a hyperedge that is occupied by the marshals before and after their move-we say that these hyperedges are blocked.\nNote that in the basic game defined in [6], the robber is not allowed to move on vertices that are occupied by the marshals before and after their move, even if they do not belong to blocked hyperedges.\nImportantly, marshals are required to play monotonically, i.e., they cannot occupy an edge that was previously occupied in the game, and which is currently not.\nThe marshals win the game if they capture the robber, by occupying an edge covering a node where the robber is.\nOtherwise, the robber wins.\nTheorem 8.\nLet H be a hypergraph such that for each node v \u2208 N(H), {v} is in E(H).\nThen, \u00afH has a k-width strict hypertree decomposition if and only if k marshals can win the strict Robber and Marshals Game on \u00afH, no matter of the robber``s moves.\n5.\nCONCLUSIONS We have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario.\nThe result is bad news, since it turned out that it is 
NP-complete to check whether a combinatorial auction has a structured item graph, even for treewidth 3.\nMotivated by this result, we investigated the use of hypertree decomposition (on the dual hypergraph associated with the scenario) and we shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width.\nFor some special, yet relevant cases, a highly parallelizable algorithm is also discussed.\nInterestingly, it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width (hence, the reason of their intractability is not their generality).\nIn particular, the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions (on the dual hypergraph), called query decompositions (see, e.g., [7]).\nIn the light of this observation, we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions, which is currently missing in the literature.\nAs a further avenue of research, it would be relevant to enhance the algorithm ComputeSetPackingk, e.g., by using specialized data structures, in order to avoid the quadratic dependency from (|E(H)| + 1)k .\nFinally, an other interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem.\nFor instance, it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for selling, and when moreover the auctioneer is satisfied when at least a given number of copies is actually sold.\nAcknowledgement G. Gottlob``s work was supported by the EC3 - E-Commerce Competence Center (Vienna) and by a Royal Society Wolfson Research Merit Award.\nIn particular, this Award allowed Gottlob to invite G. Greco for a research visit to Oxford.\nIn addition, G. Greco is supported by ICAR-CNR, and by M.I.U.R. under project TOCAI.IT.\n6.\nREFERENCES [1] I. Adler, G. Gottlob, and M. Grohe.\nHypertree-Width and Related Hypergraph Invariants.\nIn Proc.\nof EUROCOMB``05, pages 5-10, 2005.\n[2] C. Boutilier.\nSolving Concisely Expressed Combinatorial Auction Problems.\nIn Proc.\nof AAAI``02, pages 359-366, 2002.\n[3] V. Conitzer, J. Derryberry, and T. Sandholm.\nCombinatorial auctions with structured item graphs.\nIn Proc.\nof AAAI``04, pages 212-218, 2004.\n[4] E. M. Eschen and J. P. Sinrad.\nAn o(n2 ) algorithm for circular-arc graph recognition.\nIn Proc.\nof SODA``93, pages 128-137, 1993.\n[5] Y. Fujishima, K. Leyton-Brown, and Y. Shoham.\nTaming the computational complexity of combinatorial auctions: Optimal and approximate.\nIn Proc.\nof IJCAI``99, pages 548-553, 1999.\n[6] G. Gottlob, N. Leone, and F. Scarcello.\nRobbers, marshals, and guards: game theoretic and logical characterizations of hypertree width.\nJournal of Computer and System Sciences, 66(4):775-808, 2003.\n[7] G. Gottlob, N. Leone, and S. Scarcello.\nHypertree decompositions and tractable queries.\nJournal of Computer and System Sciences, 63(3):579-627, 2002.\n[8] H. H. Hoos and C. Boutilier.\nSolving combinatorial auctions using stochastic local search.\nIn Proc.\nof AAAI``00, pages 22-29, 2000.\n[9] D. Johnson.\nA Catalog of Complexity Classes.\nIn P. Cramton, Y. Shoham, and R. 
Steinberg, editors, Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, pages 67-161.\n1990.\n[10] N. Korte and R. H. Mohring.\nAn incremental linear-time algorithm for recognizing interval graphs.\nSIAM Journal on Computing, 18(1):68-81, 1989.\n[11] D. Lehmann, R. M\u00a8uller, and T. Sandholm.\nThe Winner Determination Problem.\nIn P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions.\nMIT Press, 2006.\n[12] D. Lehmann, L. I. O``Callaghan, and Y. Shoham.\nTruth revelation in approximately efficient 160 combinatorial auctions.\nJ. ACM, 49(5):577-602, 2002.\n[13] R. McAfee and J. McMillan.\nAnalyzing the airwaves auction.\nJournal of Economic Perspectives, 10(1):159175, 1996.\n[14] J. McMillan.\nSelling spectrum rights.\nJournal of Economic Perspectives, 8(3):145-62, 1994.\n[15] N. Nisan.\nBidding and allocation in combinatorial auctions.\nIn Proc.\nof EC``00, pages 1-12, 2000.\n[16] N. Robertson and P. Seymour.\nGraph minors ii.\nalgorithmic aspects of tree width.\nJournal of Algorithms, 7:309-322, 1986.\n[17] M. H. Rothkopf, A. Pekec, and R. M. Harstad.\nComputationally manageable combinatorial auctions.\nManagement Science, 44:1131-1147, 1998.\n[18] T. Sandholm.\nAn implementation of the contract net protocol based on marginal cost calculations.\nIn Proc.\nof AAAI``93, pages 256-262, 1993.\n[19] T. Sandholm.\nAlgorithm for optimal winner determination in combinatorial auctions.\nArtificial Intelligence, 135(1-2):1-54, 2002.\n[20] T. Sandholm.\nWinner determination algorithms.\nIn P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions.\nMIT Press, 2006.\n[21] T. Sandholm and S. Suri.\nBob: Improved winner determination in combinatorial auctions and generalizations.\nArtificial Intelligence, 7:33-58, 2003.\n[22] M. Tennenholtz.\nSome tractable combinatorial auctions.\nIn Proc.\nof AAAI``00, pages 98-103, 2000.\n[23] E. Zurel and N. 
Nisan.\nAn efficient approximate allocation algorithm for combinatorial auctions.\nIn Proc.\nof EC``01, pages 125-136, 2001.\n161", "lvl-3": "On The Complexity of Combinatorial Auctions : Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices .\nWhile this problem is in general NPhard , it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth ( called structured item graphs ) .\nFormally , an item graph is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any bid , the items occurring in it induce a connected subgraph .\nNote that many item graphs might be associated with a given combinatorial auction , depending on the edges selected for guaranteeing the connectedness .\nIn fact , the tractability of determining whether a structured item graph of a fixed treewidth exists ( and if so , computing one ) was left as a crucial open problem .\nIn this paper , we solve this problem by proving that the existence of a structured item graph is computationally intractable , even for treewidth 3 .\nMotivated by this bad news , we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions .\nWe show that the notion of hypertree decomposition , a recently introduced measure of hypergraph cyclicity , turns out to be most useful here .\nIndeed , we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with ( dual ) hypergraphs having bounded hypertree width .\nEven more surprisingly , we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph .\n1 .\nINTRODUCTION\nCombinatorial auctions .\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items .\nThis is desirable when a bidder 's valuation of a bundle of items is not equal to the sum of her valuations of the individual items .\nThis framework is currently used to regulate agents ' interactions in several application domains ( cf. , e.g. , [ 21 ] ) such as , electricity markets [ 13 ] , bandwidth auctions [ 14 ] , and transportation exchanges [ 18 ] .\nFormally , a combinatorial auction is a pair ( Z , B ) , where Z = { I1 , ... , Im } is the set of items the auctioneer has to sell , and B = { B1 , ... , Bn } is the set of bids from the buyers interested in the items in Z. Each bid Bi has the form ( item ( Bi ) , pay ( Bi ) ) , where pay ( Bi ) is a rational number denoting the price a buyer offers for the items in item ( Bi ) C Z .\nAn outcome for ( Z , B ) is a subset b of B such that item ( Bi ) n item ( Bj ) = 0 , for each pair Bi and Bj of bids in b with i = ~ j .\nThe winner determination problem .\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices ( i.e. ,\nBi \u2208 b \u2217 pay ( Bi ) ) over all the possible outcomes .\nThis problem , called winner determination problem ( e.g. 
, [ 11 ] ) , is known to be intractable , actually NP-hard [ 17 ] , and even not approximable in polynomial time unless NP = ZPP [ 19 ] .\nHence , it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions ( e.g. , [ 20 , 5 , 2 , 8 , 23 ] ) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time ( e.g. , [ 15 , 22 , 12 , 21 ] ) .\nIn fact , constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions .\nItem graphs .\nCurrently , the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph , which is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any\nFigure 1 : Example MaxWSP problem : ( a ) Hypergraph H ( To , go ) , and a packing h for it ; ( b ) Primal graph for H ( To , go ) ; and , ( c , d ) Two item graphs for H ( To , go ) .\nbid , the items occurring in it induce a connected subgraph .\nIndeed , the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph , i.e. , a tree or , more generally , a graph having tree-like structure [ 3 ] -- formally bounded treewidth [ 16 ] .\nTo have some intuition on how item graphs can be built , we notice that bidder interaction in a combinatorial auction ~ I , B ~ can be represented by means of a hypergraph H ( T , g ) such that its set of nodes N ( H ( T , g ) ) coincides with set of items I , and where its edges E ( H ( T , g ) ) are precisely the bids of the buyers { item ( Bi ) | Bi \u2208 B } .\nA special item graph for ~ I , B ~ is the primal graph of H ( T , g ) , denoted by G ( H ( T , g ) ) , which contains an edge between any pair of nodes in some hyperedge of H ( T , g ) .\nThen , any item graph for H ( T , g ) can be viewed as a simplification of G ( H ( T , g ) ) obtained by deleting some edges , yet preserving the connectivity condition on the nodes included in each hyperedge .\nEXAMPLE 1 .\nThe hypergraph H ( To , go ) reported in Figure 1 .\n( a ) is an encoding for a combinatorial auction ~ I0 , B0 ~ , where I0 = { I1 , ... , I5 } , and item ( Bi ) = hi , for each 1 \u2264 i \u2264 3 .\nThe primal graph for H ( To , go ) is reported in\nFigure 1 .\n( b ) , while two example item graphs are reported in Figure 1 .\n( c ) and ( d ) , where edges required for maintaining\nthe connectivity for h1 are depicted in bold .\n<\nOpen Problem : Computing structured item\ngraphs efficiently .\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined .\nHowever , exponentially many item graphs might be associated with a combinatorial auction , and it is not clear how to determine whether a structured item graph of a certain ( constant ) treewidth exists , and if so , how to compute such a structured item graph efficiently .\nPolynomial time algorithms to find the `` best '' simplification of the primal graph were so far only known for the cases where the item graph to be constructed is a line [ 10 ] , a cycle [ 4 ] , or a tree [ 3 ] , but it was an important open problem ( cf. 
[ 3 ] ) whether it is tractable to check if for a combinatorial auction , an item graph of treewidth bounded by a fixed natural number k exists and can be constructed in polynomial time , if so .\nWeighted Set Packing .\nLet us note that the hypergraph representation H ( T , g ) of a combinatorial auction ~ I , B ~ is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear ( e.g. , [ 17 ] ) .\nFormally , a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h , h ' \u2208 h with h = ~ h ' , it holds that h \u2229 h ' = \u2205 .\nLetting w be a weighting function for H , i.e. , a polynomially-time computable function from E ( H ) to rational numbers , the weight of a packing h is the rational number w ( h ) = EhCh w ( h ) , where w ( { } ) = 0 .\nThen , the maximum-weighted set packing problem for H w.r.t. w , denoted by MaxWSP ( H , w ) , is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation for the winner determination problem , given a combinatorial auction ~ I , B ~ , it is sufficient to define the weighting function w ( T , g ) ( item ( Bi ) ) = pay ( Bi ) .\nThen , the set of the solutions for the weighted set packing problem for H ( T , g ) w.r.t. w ( T , g ) coincides with the set of the solutions for the winner determination problem on ~ I , B ~ .\nEXAMPLE 2 .\nConsider again the hypergraph H ( To , go ) reported in Figure 1 .\n( a ) .\nAn example packing for H ( To , go ) is h = { h1 } , which intuitively corresponds to an outcome for ~ I0 , B0 ~ , where the auctioneer accepted the bid B1 .\nBy assuming that bids B1 , B2 , and B3 are such that pay ( B1 ) = pay ( B2 ) = pay ( B3 ) , the packing h is not a solution for the problem MaxWSP ( H ( To , go ) , w ( To , go ) ) .\nIndeed , the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem , that are , moreover polynomially recognizable .\nTowards this aim , we first study structured item graphs and solve the open problem in [ 3 ] .\nThe result is very bad news : \u25ba It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3 .\nMore formally , letting C ( ig , k ) denote the class of all the hypergraphs having an item tree of treewidth bounded by k , we prove that deciding whether a hypergraph ( associated with a combinatorial auction problem ) belongs to C ( ig , 3 ) is NP-complete .\nIn the light of this result , it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or , equivalently , the winner determination problem .\nOur investigations , this time , led to very good news which are summarized below :\n\u25ba For a hypergraph H , its dual H \u00af = ( V , E ) is such that nodes in V are in one-to-one correspondence with hyperedges in H , and for each node x \u2208 N ( H ) , { h | x \u2208 h \u2227 h \u2208\nE ( H ) } is in E .\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [ 7 ] bounded by k ( short : class C ( hw , k ) of hypergraphs ) .\nNote that a key issue of the tractability is to consider the hypertree width of the dual hypergraph H \u00af instead of the auction 
hypergraph H .\nIn fact , we can show that MaxWSP remains NP-hard even when H is acyclic ( i.e. , when it has hypertree width 1 ) , even when each node is contained in 3 hyperedges at most .\n\u25ba For some relevant special classes of hypergraphs in C ( hw , k ) , we design a higly-parallelizeable algorithm for MaxWSP .\nSpecifically , if the weighting functions can be computed in logarithmic space and weights are polynomial ( e.g. , when all the hyperegdes have unitary weights and one is interested in finding the packing with the maximum number of edges ) , we show that MaxWSP can be solved by a LOGCFL algorithm .\nRecall , in fact , that LOGCFL is the class of decision problems that are logspace reducible to context free languages , and that LOGCFL C _ NC2 C _ P ( see , e.g. , [ 9 ] ) .\n\u25ba Surprisingly , we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs .\nTo the contrary , the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs .\nIn fact , we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach .\nIntuitively , the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality , but rather to some peculiarities in its definition .\n\u25ba The proof of the above results give us some interesting insight into the notion of structured item graph .\nIndeed , we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph , which we call strict hypertree decompositions .\nA game-characterization for the notion of strict hypertree width is also proposed , which specializes the Robber and Marshals game in [ 6 ] ( proposed to characterize the hypertree width ) , and which makes it clear the further requirements on hypertree decompositions .\nThe rest of the paper is organized as follows .\nSection 2 discusses the intractability of structured item graphs .\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width , and discusses the cases where the algorithm is also highly parallelizable .\nThe comparison between the classes C ( ig , k ) and C ( hw , k ) is discussed in Section 4 .\nFinally , in Section 5 we draw our conclusions by also outlining directions for further research .\n2 .\nCOMPLEXITY OF STRUCTURED ITEM GRAPHS\nConnectedness between blocks ,\n3 .\nTRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS\n3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems\n4 .\nHYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS\n4.1 Strict Hypertree Decompositions\n5 .\nCONCLUSIONS\nWe have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario .\nThe result is bad news , since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph , even for treewidth 3 .\nMotivated by this result , we investigated the use of hypertree decomposition ( on the dual hypergraph associated with the scenario ) and we shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width .\nFor some special , yet relevant cases , a highly 
parallelizable algorithm is also discussed .\nInterestingly , it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width ( hence , the reason of their intractability is not their generality ) .\nIn particular , the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions ( on the dual hypergraph ) , called query decompositions ( see , e.g. , [ 7 ] ) .\nIn the light of this observation , we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions , which is currently missing in the literature .\nAs a further avenue of research , it would be relevant to enhance the algorithm ComputeSetPackingk , e.g. , by using specialized data structures , in order to avoid the quadratic dependency from ( | E ( H ) | + 1 ) k. Finally , an other interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem .\nFor instance , it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for selling , and when moreover the auctioneer is satisfied when at least a given number of copies is actually sold .", "lvl-4": "On The Complexity of Combinatorial Auctions : Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices .\nWhile this problem is in general NPhard , it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth ( called structured item graphs ) .\nFormally , an item graph is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any bid , the items occurring in it induce a connected subgraph .\nNote that many item graphs might be associated with a given combinatorial auction , depending on the edges selected for guaranteeing the connectedness .\nIn fact , the tractability of determining whether a structured item graph of a fixed treewidth exists ( and if so , computing one ) was left as a crucial open problem .\nIn this paper , we solve this problem by proving that the existence of a structured item graph is computationally intractable , even for treewidth 3 .\nMotivated by this bad news , we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions .\nWe show that the notion of hypertree decomposition , a recently introduced measure of hypergraph cyclicity , turns out to be most useful here .\nIndeed , we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with ( dual ) hypergraphs having bounded hypertree width .\nEven more surprisingly , we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph .\n1 .\nINTRODUCTION\nCombinatorial auctions .\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items .\nThis 
is desirable when a bidder 's valuation of a bundle of items is not equal to the sum of her valuations of the individual items .\nAn outcome for ( Z , B ) is a subset b of B such that item ( Bi ) n item ( Bj ) = 0 , for each pair Bi and Bj of bids in b with i = ~ j .\nThe winner determination problem .\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices ( i.e. ,\nBi \u2208 b \u2217 pay ( Bi ) ) over all the possible outcomes .\nThis problem , called winner determination problem ( e.g. , [ 11 ] ) , is known to be intractable , actually NP-hard [ 17 ] , and even not approximable in polynomial time unless NP = ZPP [ 19 ] .\nHence , it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions ( e.g. , [ 20 , 5 , 2 , 8 , 23 ] ) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time ( e.g. , [ 15 , 22 , 12 , 21 ] ) .\nIn fact , constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions .\nItem graphs .\nCurrently , the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph , which is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any\nFigure 1 : Example MaxWSP problem : ( a ) Hypergraph H ( To , go ) , and a packing h for it ; ( b ) Primal graph for H ( To , go ) ; and , ( c , d ) Two item graphs for H ( To , go ) .\nbid , the items occurring in it induce a connected subgraph .\nIndeed , the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph , i.e. , a tree or , more generally , a graph having tree-like structure [ 3 ] -- formally bounded treewidth [ 16 ] .\nTo have some intuition on how item graphs can be built , we notice that bidder interaction in a combinatorial auction ~ I , B ~ can be represented by means of a hypergraph H ( T , g ) such that its set of nodes N ( H ( T , g ) ) coincides with set of items I , and where its edges E ( H ( T , g ) ) are precisely the bids of the buyers { item ( Bi ) | Bi \u2208 B } .\nA special item graph for ~ I , B ~ is the primal graph of H ( T , g ) , denoted by G ( H ( T , g ) ) , which contains an edge between any pair of nodes in some hyperedge of H ( T , g ) .\nThen , any item graph for H ( T , g ) can be viewed as a simplification of G ( H ( T , g ) ) obtained by deleting some edges , yet preserving the connectivity condition on the nodes included in each hyperedge .\nEXAMPLE 1 .\nThe hypergraph H ( To , go ) reported in Figure 1 .\n( a ) is an encoding for a combinatorial auction ~ I0 , B0 ~ , where I0 = { I1 , ... 
, I5 } , and item ( Bi ) = hi , for each 1 \u2264 i \u2264 3 .\nThe primal graph for H ( To , go ) is reported in\nFigure 1 .\n( b ) , while two example item graphs are reported in Figure 1 .\n( c ) and ( d ) , where edges required for maintaining\nthe connectivity for h1 are depicted in bold .\n<\nOpen Problem : Computing structured item\ngraphs efficiently .\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined .\nHowever , exponentially many item graphs might be associated with a combinatorial auction , and it is not clear how to determine whether a structured item graph of a certain ( constant ) treewidth exists , and if so , how to compute such a structured item graph efficiently .\nWeighted Set Packing .\nLet us note that the hypergraph representation H ( T , g ) of a combinatorial auction ~ I , B ~ is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear ( e.g. , [ 17 ] ) .\nFormally , a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h , h ' \u2208 h with h = ~ h ' , it holds that h \u2229 h ' = \u2205 .\nThen , the set of the solutions for the weighted set packing problem for H ( T , g ) w.r.t. w ( T , g ) coincides with the set of the solutions for the winner determination problem on ~ I , B ~ .\nEXAMPLE 2 .\nConsider again the hypergraph H ( To , go ) reported in Figure 1 .\n( a ) .\nAn example packing for H ( To , go ) is h = { h1 } , which intuitively corresponds to an outcome for ~ I0 , B0 ~ , where the auctioneer accepted the bid B1 .\nIndeed , the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem , that are , moreover polynomially recognizable .\nTowards this aim , we first study structured item graphs and solve the open problem in [ 3 ] .\nThe result is very bad news : \u25ba It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3 .\nMore formally , letting C ( ig , k ) denote the class of all the hypergraphs having an item tree of treewidth bounded by k , we prove that deciding whether a hypergraph ( associated with a combinatorial auction problem ) belongs to C ( ig , 3 ) is NP-complete .\nIn the light of this result , it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or , equivalently , the winner determination problem .\nE ( H ) } is in E .\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [ 7 ] bounded by k ( short : class C ( hw , k ) of hypergraphs ) .\nNote that a key issue of the tractability is to consider the hypertree width of the dual hypergraph H \u00af instead of the auction hypergraph H .\nIn fact , we can show that MaxWSP remains NP-hard even when H is acyclic ( i.e. , when it has hypertree width 1 ) , even when each node is contained in 3 hyperedges at most .\n\u25ba For some relevant special classes of hypergraphs in C ( hw , k ) , we design a higly-parallelizeable algorithm for MaxWSP .\nRecall , in fact , that LOGCFL is the class of decision problems that are logspace reducible to context free languages , and that LOGCFL C _ NC2 C _ P ( see , e.g. 
, [ 9 ] ) .\n\u25ba Surprisingly , we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs .\nTo the contrary , the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs .\nIn fact , we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach .\nIntuitively , the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality , but rather to some peculiarities in its definition .\n\u25ba The proof of the above results give us some interesting insight into the notion of structured item graph .\nIndeed , we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph , which we call strict hypertree decompositions .\nThe rest of the paper is organized as follows .\nSection 2 discusses the intractability of structured item graphs .\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width , and discusses the cases where the algorithm is also highly parallelizable .\nThe comparison between the classes C ( ig , k ) and C ( hw , k ) is discussed in Section 4 .\nFinally , in Section 5 we draw our conclusions by also outlining directions for further research .\n5 .\nCONCLUSIONS\nWe have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario .\nThe result is bad news , since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph , even for treewidth 3 .\nMotivated by this result , we investigated the use of hypertree decomposition ( on the dual hypergraph associated with the scenario ) and we shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width .\nFor some special , yet relevant cases , a highly parallelizable algorithm is also discussed .\nInterestingly , it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width ( hence , the reason of their intractability is not their generality ) .\nIn particular , the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions ( on the dual hypergraph ) , called query decompositions ( see , e.g. 
, [ 7 ] ) .\nIn the light of this observation , we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions , which is currently missing in the literature .", "lvl-2": "On The Complexity of Combinatorial Auctions : Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices .\nWhile this problem is in general NPhard , it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth ( called structured item graphs ) .\nFormally , an item graph is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any bid , the items occurring in it induce a connected subgraph .\nNote that many item graphs might be associated with a given combinatorial auction , depending on the edges selected for guaranteeing the connectedness .\nIn fact , the tractability of determining whether a structured item graph of a fixed treewidth exists ( and if so , computing one ) was left as a crucial open problem .\nIn this paper , we solve this problem by proving that the existence of a structured item graph is computationally intractable , even for treewidth 3 .\nMotivated by this bad news , we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions .\nWe show that the notion of hypertree decomposition , a recently introduced measure of hypergraph cyclicity , turns out to be most useful here .\nIndeed , we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with ( dual ) hypergraphs having bounded hypertree width .\nEven more surprisingly , we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph .\n1 .\nINTRODUCTION\nCombinatorial auctions .\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items .\nThis is desirable when a bidder 's valuation of a bundle of items is not equal to the sum of her valuations of the individual items .\nThis framework is currently used to regulate agents ' interactions in several application domains ( cf. , e.g. , [ 21 ] ) such as , electricity markets [ 13 ] , bandwidth auctions [ 14 ] , and transportation exchanges [ 18 ] .\nFormally , a combinatorial auction is a pair ( Z , B ) , where Z = { I1 , ... , Im } is the set of items the auctioneer has to sell , and B = { B1 , ... , Bn } is the set of bids from the buyers interested in the items in Z. Each bid Bi has the form ( item ( Bi ) , pay ( Bi ) ) , where pay ( Bi ) is a rational number denoting the price a buyer offers for the items in item ( Bi ) C Z .\nAn outcome for ( Z , B ) is a subset b of B such that item ( Bi ) n item ( Bj ) = 0 , for each pair Bi and Bj of bids in b with i = ~ j .\nThe winner determination problem .\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices ( i.e. ,\nBi \u2208 b \u2217 pay ( Bi ) ) over all the possible outcomes .\nThis problem , called winner determination problem ( e.g. 
, [ 11 ] ) , is known to be intractable , actually NP-hard [ 17 ] , and even not approximable in polynomial time unless NP = ZPP [ 19 ] .\nHence , it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions ( e.g. , [ 20 , 5 , 2 , 8 , 23 ] ) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time ( e.g. , [ 15 , 22 , 12 , 21 ] ) .\nIn fact , constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions .\nItem graphs .\nCurrently , the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph , which is a graph whose nodes are in one-to-one correspondence with items , and edges are such that for any\nFigure 1 : Example MaxWSP problem : ( a ) Hypergraph H ( To , go ) , and a packing h for it ; ( b ) Primal graph for H ( To , go ) ; and , ( c , d ) Two item graphs for H ( To , go ) .\nbid , the items occurring in it induce a connected subgraph .\nIndeed , the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph , i.e. , a tree or , more generally , a graph having tree-like structure [ 3 ] -- formally bounded treewidth [ 16 ] .\nTo have some intuition on how item graphs can be built , we notice that bidder interaction in a combinatorial auction ~ I , B ~ can be represented by means of a hypergraph H ( T , g ) such that its set of nodes N ( H ( T , g ) ) coincides with set of items I , and where its edges E ( H ( T , g ) ) are precisely the bids of the buyers { item ( Bi ) | Bi \u2208 B } .\nA special item graph for ~ I , B ~ is the primal graph of H ( T , g ) , denoted by G ( H ( T , g ) ) , which contains an edge between any pair of nodes in some hyperedge of H ( T , g ) .\nThen , any item graph for H ( T , g ) can be viewed as a simplification of G ( H ( T , g ) ) obtained by deleting some edges , yet preserving the connectivity condition on the nodes included in each hyperedge .\nEXAMPLE 1 .\nThe hypergraph H ( To , go ) reported in Figure 1 .\n( a ) is an encoding for a combinatorial auction ~ I0 , B0 ~ , where I0 = { I1 , ... , I5 } , and item ( Bi ) = hi , for each 1 \u2264 i \u2264 3 .\nThe primal graph for H ( To , go ) is reported in\nFigure 1 .\n( b ) , while two example item graphs are reported in Figure 1 .\n( c ) and ( d ) , where edges required for maintaining\nthe connectivity for h1 are depicted in bold .\n<\nOpen Problem : Computing structured item\ngraphs efficiently .\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined .\nHowever , exponentially many item graphs might be associated with a combinatorial auction , and it is not clear how to determine whether a structured item graph of a certain ( constant ) treewidth exists , and if so , how to compute such a structured item graph efficiently .\nPolynomial time algorithms to find the `` best '' simplification of the primal graph were so far only known for the cases where the item graph to be constructed is a line [ 10 ] , a cycle [ 4 ] , or a tree [ 3 ] , but it was an important open problem ( cf. 
[ 3 ] ) whether it is tractable to check if for a combinatorial auction , an item graph of treewidth bounded by a fixed natural number k exists and can be constructed in polynomial time , if so .\nWeighted Set Packing .\nLet us note that the hypergraph representation H ( T , g ) of a combinatorial auction ~ I , B ~ is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear ( e.g. , [ 17 ] ) .\nFormally , a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h , h ' \u2208 h with h = ~ h ' , it holds that h \u2229 h ' = \u2205 .\nLetting w be a weighting function for H , i.e. , a polynomially-time computable function from E ( H ) to rational numbers , the weight of a packing h is the rational number w ( h ) = EhCh w ( h ) , where w ( { } ) = 0 .\nThen , the maximum-weighted set packing problem for H w.r.t. w , denoted by MaxWSP ( H , w ) , is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation for the winner determination problem , given a combinatorial auction ~ I , B ~ , it is sufficient to define the weighting function w ( T , g ) ( item ( Bi ) ) = pay ( Bi ) .\nThen , the set of the solutions for the weighted set packing problem for H ( T , g ) w.r.t. w ( T , g ) coincides with the set of the solutions for the winner determination problem on ~ I , B ~ .\nEXAMPLE 2 .\nConsider again the hypergraph H ( To , go ) reported in Figure 1 .\n( a ) .\nAn example packing for H ( To , go ) is h = { h1 } , which intuitively corresponds to an outcome for ~ I0 , B0 ~ , where the auctioneer accepted the bid B1 .\nBy assuming that bids B1 , B2 , and B3 are such that pay ( B1 ) = pay ( B2 ) = pay ( B3 ) , the packing h is not a solution for the problem MaxWSP ( H ( To , go ) , w ( To , go ) ) .\nIndeed , the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem , that are , moreover polynomially recognizable .\nTowards this aim , we first study structured item graphs and solve the open problem in [ 3 ] .\nThe result is very bad news : \u25ba It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3 .\nMore formally , letting C ( ig , k ) denote the class of all the hypergraphs having an item tree of treewidth bounded by k , we prove that deciding whether a hypergraph ( associated with a combinatorial auction problem ) belongs to C ( ig , 3 ) is NP-complete .\nIn the light of this result , it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or , equivalently , the winner determination problem .\nOur investigations , this time , led to very good news which are summarized below :\n\u25ba For a hypergraph H , its dual H \u00af = ( V , E ) is such that nodes in V are in one-to-one correspondence with hyperedges in H , and for each node x \u2208 N ( H ) , { h | x \u2208 h \u2227 h \u2208\nE ( H ) } is in E .\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [ 7 ] bounded by k ( short : class C ( hw , k ) of hypergraphs ) .\nNote that a key issue of the tractability is to consider the hypertree width of the dual hypergraph H \u00af instead of the auction 
hypergraph H .\nIn fact , we can show that MaxWSP remains NP-hard even when H is acyclic ( i.e. , when it has hypertree width 1 ) , and even when each node is contained in 3 hyperedges at most .\n\u25ba For some relevant special classes of hypergraphs in C ( hw , k ) , we design a highly-parallelizable algorithm for MaxWSP .\nSpecifically , if the weighting functions can be computed in logarithmic space and weights are polynomial ( e.g. , when all the hyperedges have unitary weights and one is interested in finding the packing with the maximum number of edges ) , we show that MaxWSP can be solved by a LOGCFL algorithm .\nRecall , in fact , that LOGCFL is the class of decision problems that are logspace reducible to context-free languages , and that LOGCFL \u2286 NC2 \u2286 P ( see , e.g. , [ 9 ] ) .\n\u25ba Surprisingly , we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs .\nOn the contrary , the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs .\nIn fact , we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach .\nIntuitively , the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality , but rather to some peculiarities in its definition .\n\u25ba The proof of the above results gives us some interesting insight into the notion of structured item graph .\nIndeed , we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph , which we call strict hypertree decompositions .\nA game characterization for the notion of strict hypertree width is also proposed , which specializes the Robber and Marshals game in [ 6 ] ( proposed to characterize the hypertree width ) , and which makes clear the further requirements on hypertree decompositions .\nThe rest of the paper is organized as follows .\nSection 2 discusses the intractability of structured item graphs .\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width , and discusses the cases where the algorithm is also highly parallelizable .\nThe comparison between the classes C ( ig , k ) and C ( hw , k ) is discussed in Section 4 .\nFinally , in Section 5 we draw our conclusions by also outlining directions for further research .\n2 .\nCOMPLEXITY OF STRUCTURED ITEM GRAPHS\nLet H be a hypergraph .\nA graph G = ( V , E ) is an item graph for H if V = N ( H ) and , for each h \u2208 E ( H ) , the subgraph of G induced over the nodes in h is connected .\nAn important class of item graphs is that of structured item graphs , i.e.
, of those item graphs having bounded treewidth as formalized below .\nA tree decomposition [ 16 ] of a graph G = ( V , E ) is a pair ( T , \u03c7 ) , where T = ( N , F ) is a tree , and \u03c7 is a labelling function assigning to each vertex p \u2208 N a set of vertices \u03c7 ( p ) \u2286 V , such that the following conditions are satisfied : ( 1 ) for each vertex b of G , there exists p \u2208 N such that b \u2208 \u03c7 ( p ) ; ( 2 ) for each edge { b , d } \u2208 E , there exists p \u2208 N such that { b , d } \u2286 \u03c7 ( p ) ; ( 3 ) for each vertex b of G , the set { p \u2208 N | b \u2208 \u03c7 ( p ) } induces a connected subtree of T .\nThe width of ( T , \u03c7 ) is the number maxp \u2208 N | \u03c7 ( p ) | \u2212 1 .\nThe treewidth of G , denoted by tw ( G ) , is the minimum width over all its tree decompositions .\nThe winner determination problem can be solved in polynomial time on item graphs having bounded treewidth [ 3 ] .\nTHEOREM 1 ( CF. [ 3 ] ) .\nAssume a k-width tree decomposition ( T , \u03c7 ) of an item graph for H is given .\nThen , MaxWSP ( H , w ) can be solved in time O ( | T | ^ 2 \u00d7 ( | E ( H ) | + 1 ) ^ ( k + 1 ) ) .\nMany item graphs can be associated with a hypergraph .\nAs an example , observe that the item graph in Figure 1 .\n( c ) has treewidth 1 , while Figure 1 .\n( d ) reports an item graph whose treewidth is 2 .\nIndeed , it was an open question whether for a given constant k it can be checked in polynomial time if an item graph of treewidth k exists , and if so , whether such an item graph can be efficiently computed .\nLet C ( ig , k ) denote the class of all the hypergraphs having an item graph G such that tw ( G ) \u2264 k .\nThe main result of this section is to show that the class C ( ig , k ) is hard to recognize .\nThe proof of this result relies on an elaborate reduction from the Hamiltonian path problem HP ( s , t ) of deciding whether there is a Hamiltonian path from a node s to a node t in a directed graph G = ( N , E ) .\nTo help the intuition , we report here a high-level overview of the main ingredients exploited in the proof .\nThe general idea is to build a hypergraph HG such that there is an item graph G ' for HG with tw ( G ' ) \u2264 3 if and only if HP ( s , t ) over G has a solution .\nFirst , we discuss the way HG is constructed .\nSee Figure 2 .\n( a ) for an illustration , where the graph G consists of the nodes s , x , y , and t , and the set of its edges is { e1 = ( s , x ) , e2 = ( x , y ) , e3 = ( x , t ) , e4 = ( y , t ) } .\nFrom G to HG .\nLet G = ( N , E ) be a directed graph .\nThen , the set of the nodes in HG is such that : for each x \u2208 N , N ( HG ) contains the nodes bsx , btx , b ' x , b ' ' x , bdx ; for each e = ( x , y ) \u2208 E , N ( HG ) contains the nodes ns ' x , ns ' ' x , nt ' y , nt ' ' y , nsex and ntey .\nNo other node is in N ( HG ) .\nHyperedges in HG are of three kinds :\n1 ) for each x \u2208 N , E ( HG ) contains the hyperedges : \u2022 Sx = { bsx } \u222a { nsex | e = ( x , y ) \u2208 E } ; \u2022 Tx = { btx } \u222a { ntex | e = ( z , x ) \u2208 E } ; \u2022 A1x = { bdx , b ' x } , A2x = { bdx , b ' ' x } , and A3x = { b ' x , b ' ' x } -- notice that these hyperedges induce a clique on the nodes { b ' x , b ' ' x , bdx } ;\nFigure 2 : Proof of Theorem 2 : ( a ) from G to HG -- hyperedges in 1 ) and 2 ) are reported only ; ( b ) a skeleton for a tree decomposition TD for HG .\n\u2022 SA1x = { bsx , b ' x } , SA2x = { bsx , b ' ' x } , SA3x = { bsx , bdx } -- notice that these hyperedges plus A1x , A2x , and A3x
induce a clique on the nodes { bsx , b ' x , b ' ' x , bdx } ; \u2022 TA1x = { btx , b ' x } , TA2x = { btx , b ' ' x } , and TA3x = { btx , bdx } -- notice that these hyperedges plus A1x , A2x , and A3x induce a clique on the nodes { btx , b ' x , b ' ' x , bdx } ; 2 ) for each e = ( x , y ) \u2208 E , E ( HG ) contains the hyperedges : \u2022 SHx = { ns ' x , ns ' ' x } ; \u2022 THy = { nt ' y , nt ' ' y } ; \u2022 SE'e = { ns ' x , nsex } and SE ' ' e = { ns ' ' x , nsex } -- notice that these two hyperedges plus SHx induce a clique on the nodes { ns ' x , ns ' ' x , nsxe } ; \u2022 TE'e = { nt ' y , ntey } and TE ' ' e = { nt ' ' y , ntey } -- notice that these two hyperedges plus THy induce a clique on the nodes { nt ' y , nt ' ' y , ntey } .\nNotice that each of the above hyperedges but those of the form Sx and Tx contains exactly two nodes .\nAs an example of the hyperedges of kind 1 ) and 2 ) , the reader may refer to the example construction reported in Figure 2 .\n( a ) , and notice , for instance , that Sx = { bsx , nse2 x , nse3\n3 ) finally , we denote by DG the set containing the hyperedges in E ( HG ) of the third kind .\nIn the reduction we are exploiting , DG can be an arbitrary set of hyperedges satisfying the four conditions that are discussed below .\nLet PG be the set of the following | PG | \u2264 | N | + 3 \u00d7 | E | pairs : PG = { ( b ' x , b ' ' x ) | x \u2208 N } \u222a { ( ns ' x , ns ' ' x ) , ( nt ' y , nt ' ' y ) , ( nsex , nte y ) | e = ( x , y ) \u2208 E } .\nAlso , let I ( v ) denote the set { h \u2208 E ( H ) | v \u2208 h } of the hyperedges of H that are touched by v ; and , for a set V \u2286 N ( H ) , let I ( V ) = UvcV I ( v ) .\nThen , DG has to be a set such that :\n( c1 ) \u2200 ( \u03b1 , ,3 ) \u2208 PG , I ( \u03b1 ) \u2229 I ( ,3 ) \u2229 DG = \u2205 ; ( c2 ) \u2200 ( \u03b1 , ,3 ) \u2208 PG , I ( \u03b1 ) \u222a I ( ,3 ) \u2287 DG ; ( c3 ) \u2200 \u03b1 \u2208 N such that ~ \u2203 ,3 \u2208 N with ( \u03b1 , ,3 ) \u2208 PG or ( ,3 , \u03b1 ) \u2208 PG , it holds : I ( \u03b1 ) \u2229 DG = \u2205 ; and , ( c4 ) \u2200 S \u2286 N such that | S | \u2264 3 and where ~ \u2203 \u03b1 , ,3 \u2208 S with ( \u03b1 , ,3 ) \u2208 PG , it is the case that : I ( S ) \u2287 ~ DG .\nIntuitively , the set DG is such that each of its hyperedges is touched by exactly one of the two nodes in every pair\nof PG -- cf. ( c1 ) and ( c2 ) .\nMoreover , hyperedges in DG touch only vertices included in at least a pair of PG -- cf. ( c3 ) ; and , any triple of nodes is not capable of touching all the elements of DG if none of the pairs that can be built from it belongs to PG -- cf. ( c4 ) .\nThe reader may now ask whether a set DG exists at all satisfying ( c1 ) , ( c2 ) , ( c3 ) and ( c4 ) .\nIn the following lemma , we positively answer this question and refer the reader to its proof for an example construction .\nLEMMA 1 .\nA set DG , with | DG | = 2 \u00d7 | PG | + 2 , satisfying conditions ( c1 ) , ( c2 ) , ( c3 ) , and ( c4 ) can be built in time O ( | PG | 2 ) .\nKey Ingredients .\nWe are now in the position of presenting an overview of the key ingredients of the proof .\nLet G ' be an arbitrary item graph for HG , and let TD = ~ T , \u03c7 ~ be a 3-width tree decomposition of G ' ( note that , because of the cliques , e.g. 
, on the nodes { bsx , b ' x , b ' ' x , bdx } , any item graph for HG has treewidth 3 at least ) .\nThere are three basic observations serving the purpose of proving the correctness of the reduction .\n`` Blocks '' of TD : First , we observe that TD must contain some special kinds of vertex .\nSpecifically , for each node x \u2208 N , TD contains a vertex bs ( x ) such that\nIntuitively , these vertices are required to cover the cliques of HG associated with the hyperedges of kind 1 ) and 2 ) .\nEach of these vertices plays a specific role in the reduction .\nIndeed , each directed edge e = ( x , y ) \u2208 E is encoded in TD by means of the vertices : ns ( x , e ) , representing precisely that e starts from x ; and , nt ( y , e ) , representing precisely that e terminates into y. Also , each node x \u2208 N is encoded in TD be means of the vertices : bs ( x ) , representing the starting point of edges originating from x ; and , bt ( x ) , representing the terminating point of edges ending into x .\nAs an example , Figure 2 .\n( b ) reports the `` skeleton '' of a tree decomposition TD .\nThe reader may notice in it the blocks defined above and how they are related with the hypergraph HG in Figure 2 .\n( a ) -- other blocks in it ( of the form w ( x , y ) ) are defined next .\nConnectedness between blocks ,\nand uniqueness of the connections : The second crucial observation is that in the path connecting a vertex of the form bs ( x ) ( resp. , bt ( y ) ) with a vertex of the form ns ( x , e ) ( resp. , nt ( y , e ) ) there is one special vertex of the form w ( x , y ) such that : \u03c7 ( w ( x , y ) ) \u2287 { nseix , ntei y } , for some edge e ' = ( x , y ) \u2208 E. Guaranteeing the existence of one such vertex is precisely the role played by the hyperedges in DG .\nThe arguments for the proof are as follows .\nFirst , we observe that I ( \u03c7 ( bs ( x ) ) ) \u2229 I ( \u03c7 ( ns ( x , e ) ) ) \u2287 DG \u222a { Sx } and I ( \u03c7 ( bt ( y ) ) ) \u2229 I ( \u03c7 ( nt ( y , e ) ) ) \u2287 DG \u222a { Ty } .\nThen , we show a property stating that for a pair of consecutive vertices p and q in the path connecting bs ( x ) and ns ( x , e ) ( resp. , bt ( y ) and nt ( y , e ) ) , I ( \u03c7 ( p ) \u2229 \u03c7 ( q ) ) \u2287 I ( \u03c7 ( bs ( x ) ) ) \u2229 I ( \u03c7 ( ns ( x , e ) ) ) ( resp. , I ( \u03c7 ( p ) \u2229 \u03c7 ( q ) ) \u2287\nBased on this observation , and by exploiting the properties of the hyperedges in DG , it is not difficult to show that any pair of consecutive vertices p and q must share two nodes of HG forming a pair in PG , and must both touch Sx ( resp. , Ty ) .\nWhen the treewidth of G ' is 3 , we can conclude that a vertex , say w ( x , y ) , in this path is such that \u03c7 ( w ( x , y ) ) \u2287 { nseix , ntei y } , for some edge e ' = ( x , y ) \u2208 E -- to this end , note that \u2208 Ty , and I ( \u03c7 ( w ( x , y ) ) ) \u2287 DG .\nIn particular , w ( x , y ) is the only kind of vertex satisfying these conditions , i.e. , in the path there is no further vertex of the form w ( x , z ) , for z = ~ y ( resp. 
, w ( z , y ) , for z \u2260 x ) .\nTo help the intuition , we observe that having a vertex of the form w ( x , y ) in TD corresponds to the selection of an edge from node x to node y in the Hamiltonian path .\nIn fact , given the uniqueness of these vertices selected for ensuring the connectivity , a one-to-one correspondence can be established between the existence of a Hamiltonian path for G and the vertices of the form w ( x , y ) .\nAs an example , in Figure 2 .\n( b ) , the vertices of the form w ( s , x ) , w ( x , y ) , and w ( y , t ) are in TD , and GTD shows the corresponding Hamiltonian path .\nUnused blocks : Finally , the third ingredient of the proof is the observation that if a vertex of the form w ( x , y ) , for an edge e ' = ( x , y ) \u2208 E is not in TD ( i.e. , if the edge ( x , y ) does not belong to the Hamiltonian path ) , then the corresponding block ns ( x , ei ) ( resp. , nt ( y , ei ) ) can be arbitrarily appended in the subtree rooted at the block ns ( x , e ) ( resp. , nt ( y , e ) ) , where e is the edge of the form e = ( x , z ) ( resp. , e = ( z , y ) ) such that w ( x , z ) ( resp. , w ( z , y ) ) is in TD .\nE.g. , Figure 2 .\n( a ) shows w ( x , t ) , which is not used in TD , and Figure 2 .\n( b ) shows how the blocks ns ( x , e3 ) and nt ( t , e3 ) can be arranged in TD for ensuring the connectedness condition .\n3 .\nTRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS\nSince constructing structured item graphs is intractable , it is relevant to assess whether other structural restrictions can be used to single out classes of tractable MaxWSP instances .\nTo this end , we focus on the notion of hypertree decomposition [ 7 ] , which is a natural generalization of hypergraph acyclicity and which has been profitably used in other domains , e.g. , constraint satisfaction and database query evaluation , to identify tractability islands for NP-hard problems .\nA hypertree for a hypergraph H is a triple ~ T , \u03c7 , \u03bb ~ , where T = ( N , E ) is a rooted tree , and \u03c7 and \u03bb are labelling functions which associate each vertex p \u2208 N with two sets \u03c7 ( p ) \u2286 N ( H ) and \u03bb ( p ) \u2286 E ( H ) .\nIf T ' = ( N ' , E ' ) is a subtree of T , we define \u03c7 ( T ' ) = \u222av \u2208 N ' \u03c7 ( v ) .\nWe denote the set of vertices N of T by vertices ( T ) .\nMoreover , for any p \u2208 N , Tp denotes the subtree of T rooted at p .\nDEFINITION 1 .\nA hypertree decomposition of a hypergraph H is a hypertree HD = ~ T , \u03c7 , \u03bb ~ for H which satisfies all the following conditions : 1 .\nfor each edge h \u2208 E ( H ) , there exists p \u2208 vertices ( T ) such that h \u2286 \u03c7 ( p ) ( we say that p covers h ) ;\nFigure 3 : Example MaxWSP problem : ( a ) Hypergraph H1 ; ( b ) Hypergraph \u00af H1 ; ( c ) A 2-width hypertree decomposition of \u00af H1 .\n2 .\nfor each node Y \u2208 N ( H ) , the set { p \u2208 vertices ( T ) | Y \u2208 \u03c7 ( p ) } induces a ( connected ) subtree of T ; 3 .\nfor each p \u2208 vertices ( T ) , \u03c7 ( p ) \u2286 N ( \u03bb ( p ) ) ; 4 .\nfor each p \u2208 vertices ( T ) , N ( \u03bb ( p ) ) \u2229 \u03c7 ( Tp ) \u2286 \u03c7 ( p ) .\nThe width of a hypertree decomposition ( T , \u03c7 , \u03bb ) is maxp \u2208 vertices ( T ) | \u03bb ( p ) | .\nThe hypertree width hw ( H ) of H is the minimum width over all its hypertree decompositions .\nA hypergraph H is acyclic if hw ( H ) = 1 .\n\u2737 EXAMPLE 3 .\nThe hypergraph H ~ I0 , B0 ~ reported in Figure 1 .\n( a ) is an example acyclic hypergraph .\nInstead , both the
hypergraphs H1 and \u00af H1 shown in Figure 3 .\n( a ) and Figure 3 .\n( b ) , respectively , are not acyclic since their hypertree width is 2 .\nA 2-width hypertree decomposition for \u00af H1 is reported in Figure 3 .\n( c ) .\nIn particular , observe that H1 has been obtained by adding the two hyperedges h4 and h5 to H ~ I0 , B0 ~ to model , for instance , that two new bids , B4 and B5 , respectively , have been proposed to the auctioneer .\n\u2701 In the following , rather than working on the hypergraph H associated with a MaxWSP problem , we shall deal with its dual \u00af H , i.e. , with the hypergraph such that its nodes are in one-to-one correspondence with the hyperedges of H , and where for each node x \u2208 N ( H ) , { h | x \u2208 h \u2227 h \u2208 E ( H ) } is in E ( \u00af H ) .\nAs an example , the reader may want to check again the hypergraph H1 in Figure 3 .\n( a ) and notice that the hypergraph in Figure 3 .\n( b ) is in fact its dual .\nThe rationale for this choice is that issuing restrictions on the original hypergraph is a guarantee for the tractability only in very simple scenarios .\n3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems\nFor a fixed constant k , let C ( hw , k ) denote the class of all the hypergraphs whose dual hypergraphs have hypertree width bounded by k .\nThe maximum weighted-set packing problem can be solved in polynomial time on the class C ( hw , k ) by means of the algorithm ComputeSetPackingk , shown in Figure 4 .\nThe algorithm receives in input a hypergraph H , a weighting function w , and a k-width hypertree decomposition HD = ( T = ( N , E ) , \u03c7 , \u03bb ) of \u00af H. For each vertex v \u2208 N , let Hv be the hypergraph whose set of nodes N ( Hv ) C _ N ( H ) coincides with \u03bb ( v ) , and whose set of edges E ( Hv ) C _ E ( H ) coincides with \u03c7 ( v ) .\nIn an initialization step , the algorithm equips each vertex v with all the possible packings for Hv , which are stored in the set Hv .\nNote that the size of Hv is bounded by ( | E ( H ) | + 1 ) k , since each node in \u03bb ( v ) is either left uncovered in a packing or is covered with precisely one of the hyperedges in \u03c7 ( v ) C _ E ( H ) .\nThen , ComputeSetPackingk is designed to filter these packings by retaining only those that `` conform '' with some packing for Hc , for each children c of v in T , as formalized next .\nLet hv and hc be two packings for Hv and Hc , respectively .\nWe say that hv conforms with hc , denoted by hv \u2248 hc if : for each h \u2208 hc \u2229 E ( Hv ) , h is in hv ; and , for each h \u2208 ( E ( Hc ) \u2212 hc ) , h is not in hv .\nEXAMPLE 4 .\nConsider again the hypertree decomposition of \u00af H1 reported in Figure 3 .\n( c ) .\nThen , the set of all the possible packings ( which are build in the initialization step of ComputeSetPackingk ) , for each of its vertices , is re\nFigure 5 : Example application of Algorithm ComputeSetPackingk .\nFigure 4 : Algorithm ComputeSetPackingk .\nMoreover , an arrow from a packing hc to hv denotes that hv conforms with hc .\nFor instance , the reader may check that the packing { h3 } \u2208 Hv1 conforms with the packing { h2 , h3 } \u2208 Hv3 , but do not conform with { h1 } \u2208 Hv3 .\n\u2701 ComputeSetPackingk builds a solution by traversing T in two phases .\nIn the first phase , vertices of T are processed from the leaves to the root r , by means of the procedure BottomUp .\nFor each node v being processed , the set Hv is preliminary updated by removing all 
the packings hv that do not conform with any packing for some of the children of v .\nAfter this filtering is performed , the weight $ hv is updated .\nIntuitively , $ vhv stores the weight of the best partial packing for H computed by using only the hyperedges occurring in \u03c7 ( Tv ) .\nIndeed , if v is a leaf , then $ vhv = w ( hv ) .\nOtherwise , for each child c of v in T , $ vhv is updated with the maximum of $ c hc \u2212 w ( hc \u2229 hv ) over all the packings hc that conforms with hv ( resolving ties arbitrarily ) .\nThe packing \u00af hc for which this maximum is achieved is stored in the variable hhv , c .\nIn the second phase , the tree T is processed starting from the root .\nFirstly , the packing h \u2217 is selected that maximizes the weight equipped with the packings in Hr .\nThen , procedure TopDown is used to extend h \u2217 to all the other partial packings for vertices of T .\nIn particular , at each vertex v , h \u2217 is extended with the packing hhv , c , for each child c of v. EXAMPLE 5 .\nAssume that , in our running example , w ( h1 ) = w ( h2 ) = w ( h3 ) = w ( h4 ) = 1 .\nThen , an execution of ComputeSetPackingk is graphically depicted in Figure 5 .\n( b ) , where an arrow from a packing hc to a packing hv is used to denote that hc = hhv , c. Specifically , the choices made during the computation are such that the packing { h2 , h3 } is computed .\nIn particular , during the bottom-up phase , we have that :\nand we set $ v1\ninstance , note that $ v1 { h5 } = 2 since { h5 } conforms with the packing { h4 } of Hv2 such that $ v2 { h4 } = 1 .\nThen , at the beginning of the top-down phase , ComputeSetPackingk selects { h3 } as a packing for Hv1 and propagates this choice in the tree .\nEquivalently , the algorithm may have chosen { h5 } .\nAs a further example , the way the solution { h1 } is obtained by the algorithm when w ( h1 ) = 5 and w ( h2 ) = w ( h3 ) = w ( h4 ) = 1 is reported in Figure 5 .\n( c ) .\nNotice that , this time , in the top-down phase , ComputeSetPackingk starts selecting { h1 } as the best packing for Hv1 .\n\u2701 THEOREM 4 .\nLet H be a hypergraph and w be a weighting function for it .\nLet HD = ( T , \u03c7 , \u03bb ) be a complete k-width hypertree decomposition of \u00af H. Then , ComputeSetPackingk on input H , w , and HD correctly outputs a solution for MaxWSP ( H , w ) in time O ( | T | \u00d7 ( | E ( H ) | + 1 ) 2k ) .\nPROOF .\n[ Sketch ] We observe that h \u2217 ( computed by ComputeSetPackingk ) is a packing for H. Indeed , consider a pair of hyperedges h1 and h2 in h \u2217 , and assume , for the sake of contradiction , that h1 \u2229 h2 = ~ \u2205 .\nLet v1 ( resp. , v2 ) be an arbitrary vertex of T , for which ComputeSetPackingk included h1 ( resp. , h2 ) in h \u2217 in the bottom-down computation .\nBy construction , we have h1 \u2208 \u03c7 ( v1 ) and h2 \u2208 \u03c7 ( v2 ) .\nLet I be an element in h1 \u2229 h2 .\nIn the dual hypergraph H , I is a hyperedge in E ( \u00af H ) which covers both the nodes h1 and h2 .\nHence , by condition ( 1 ) in Definition 1 , there is a vertex v \u2208 vertices ( T ) such that { h1 , h2 } \u2286 \u03c7 ( v ) .\nNote that , because of the connectedness condition in Definition 1 , we can also assume , w.l.o.g. , that v is in the path connecting v1 and v2 in T. 
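To keep the conformance relation concrete while following this argument, the sketch below enumerates the packings of a small vertex hypergraph Hv and tests whether a packing chosen at a vertex conforms with one chosen at a child, in the sense defined above. This is only a minimal Python illustration under assumed data structures: the function names, the frozenset encoding of hyperedges, and the tiny two-bid instance are hypothetical, and the bottom-up/top-down weight propagation of ComputeSetPackingk (Figure 4) is deliberately not reproduced.

from itertools import combinations

def packings(chi_v):
    # All packings of the vertex hypergraph Hv: subsets of the hyperedges in
    # chi_v (each hyperedge modelled as a frozenset of items) whose members
    # are pairwise disjoint.
    edges = list(chi_v)
    result = []
    for r in range(len(edges) + 1):
        for combo in combinations(edges, r):
            if all(a.isdisjoint(b) for a, b in combinations(combo, 2)):
                result.append(frozenset(combo))
    return result

def conforms(h_v, h_c, chi_v, chi_c):
    # h_v conforms with h_c: every hyperedge kept by the child and visible at
    # v must also be kept at v, and every hyperedge visible at the child but
    # rejected there must not be kept at v.
    if any(h not in h_v for h in (h_c & chi_v)):
        return False
    if any(h in h_v for h in (chi_c - h_c)):
        return False
    return True

# Tiny hypothetical instance: two bids that share item I2.
h1 = frozenset({"I1", "I2"})
h2 = frozenset({"I2", "I3"})
chi_parent = frozenset({h1, h2})
chi_child = frozenset({h2})
for p in packings(chi_parent):
    ok = [c for c in packings(chi_child) if conforms(p, c, chi_parent, chi_child)]
    print(len(p), "edges kept at the parent;", len(ok), "conforming child packings")

In the algorithm itself, this test is what drives both the filtering of the sets Hv during BottomUp and, through the stored conforming maxima, the extension of h* during TopDown.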
Let hv \u2208 Hv denote the element added by ComputeSetPackingk into h \u2217 during the bottom-down phase .\nSince the elements in Hv are packings for Hv , it is the case that either h1 \u2208 hv or h2 \u2208 hv .\nAssume , w.l.o.g. , that h1 \u2208 ~ hv , and notice that each vertex w in T in the path connecting v to v1 is such that h1 \u2208 \u03c7 ( w ) , because of the connectedness condition .\nHence , because of definition of conformance , the packing hw selected by ComputeSetPackingk to be added at vertex w in h \u2217 must be such that h1 \u2208 ~ hw .\nThis holds in particular for w = v1 .\nContradiction with the definition of v1 .\nTherefore , h \u2217 is a packing for H .\nIt remains then to show that it has the maximum weight over all the packings for H. To this aim , we can use structural induction on T to prove that , in the bottom-up phase , the variable $ vhv is updated to contain the weight of the packing on the edges in \u03c7 ( Tv ) , which contains hv and which has the maximum weight over all such packings for the edges in \u03c7 ( Tv ) .\nThen , the result follows , since in the top-down phase , the packing hr giving the maximum weight over \u03c7 ( Tr ) = E ( H ) is first included in h \u2217 , and then extended at each node c with the packing hhv , c conformingly with hv and such that the maximum value of ~ vhv is achieved .\nAs for the complexity , observe that the initialization step requires the construction of the set Hv , for each vertex v , and each set has size ( | E ( H ) | + 1 ) k at most .\nThen , the function BottomUp checks for the conformance between strategies in Hv with strategies in Hc , for each pair ( v , c ) \u2208 E , and updates the weight rhv .\nThese tasks can be carried out in time O ( ( | E ( H ) | + 1 ) 2k ) and must be repeated for each edge in T , i.e. , O ( | T | ) times .\nFinally , the function TopDown can be implemented in linear time in the size of T , since it just requires updating h \u2217 by accessing the variable hhv , c .\nThe above result shows that if a hypertree decomposition of width k is given , the MaxWSP problem can be efficiently solved .\nMoreover , differently from the case of structured item graphs , it is well known that deciding the existence of a k-bounded hypertree decomposition and computing one ( if any ) are problems which can be efficiently solved in polynomial time [ 7 ] .\nTherefore , Theorem 4 witnesses that the class C ( hw , k ) actually constitutes a tractable class for the winner determination problem .\nAs the following theorem shows , for large subclasses ( that depend only on how the weight function is specified ) , MaxWSP ( H , w ) is even highly parallelizeable .\nLet us call a weighting function smooth if it is logspace computable and if all weights are polynomial ( and thus just require O ( log n ) bits for their representation ) .\nRecall that LOGCFL is a parallel complexity class contained in NC2 , cf. 
[ 9 ] .\nThe functional version of LOGCFL is LLOGCFL , which is obtained by equipping a logspace transducer with an oracle in LOGCFL .\nTHEOREM 5 .\nLet H be a hypergraph in C ( hw , k ) , and let w be a smooth weighting function for it .\nThen , MaxWSP ( H , w ) is in LLOGCFL .\n4 .\nHYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS\nGiven that the class C ( hw , k ) has been shown to be an island of tractability for the winner determination problem , and given that the class C ( ig , k ) has been shown not to be efficiently recognizable , one may be inclined to think that there are instances having unbounded hypertree width , but admitting an item graph of bounded tree width ( so that the intractability of structured item graphs would lie in their generality ) .\nSurprisingly , we establish this is not the case .\nThe line of the proof is to first show that structured item graphs are in one-to-one correspondence with a special kind of hypertree decompositions of the dual hypergraph , which we shall call strict .\nThen , the result will follow by proving that k-width strict hypertree decompositions are less powerful than kwith hypertree decompositions .\n4.1 Strict Hypertree Decompositions\nLet H be a hypergraph , and let V \u2286 N ( H ) be a set of nodes and X , Y \u2208 N ( H ) .\nX is [ V ] - adjacent to Y if there exists an edge h \u2208 E ( H ) such that { X , Y } \u2286 ( h \u2212 V ) .\nA [ V ] - path \u03c0 from X to Y is a sequence X = X0 , ... , X cents = Y of variables such that : Xi is [ V ] - adjacent to Xi +1 , for each i \u2208 [ 0...2-1 ] .\nA set W \u2286 N ( H ) of nodes is [ V ] - connected if \u2200 X , Y \u2208 W there is a [ V ] - path from X to Y .\nA [ V ] - component is a maximal [ V ] - connected non-empty set of nodes W \u2286 ( N ( H ) \u2212 V ) .\nFor any [ V ] - component C , let E ( C ) = { h \u2208 E ( H ) | h \u2229 C = ~ \u2205 } .\nDEFINITION 2 .\nA hypertree decomposition HD = ~ T , \u03c7 , A ~ of H is strict if the following conditions hold : 1 .\nfor each pair of vertices r and s in vertices ( T ) such that s is a child of r , and for each [ \u03c7 ( r ) ] - component Cr s.t. 
Cr \u2229 \u03c7 ( Ts ) = ~ \u2205 , Cr is a [ \u03c7 ( r ) \u2229 N ( A ( r ) \u2229 A ( s ) ) ] - component ; 2 .\nfor each edge h \u2208 E ( H ) , there is a vertex p such that h \u2208 A ( p ) and h \u2286 \u03c7 ( p ) ( we say p strongly covers h ) ; 3 .\nfor each edge h \u2208 E ( H ) , the set { p \u2208 vertices ( T ) | h \u2208 A ( p ) } induces a ( connected ) subtree of T.\nThe strict hypertree width shw ( H ) of H is the minimum width over all its strict hypertree decompositions .\n\u2737 The basic relationship between nice hypertree decompositions and structured item graphs is shown in the following theorem .\nNote that , as far as the maximum weighted-set packing problem is concerned , given a hypergraph H , we can always assume that for each node v \u2208 N ( H ) , { v } is in E ( H ) .\nIn fact , if this hyperedge is not in the hypergraph , then it can be added without loss of generality , by setting w ( { v } ) = 0 .\nTherefore , letting C ( shw , k ) denote the class of all the hypergraphs whose dual hypergraphs ( associated with maximum 2The term `` +1 '' only plays the technical role of taking care of the different definition of width for tree decompositions and hypertree decompositions .\nweighted-set packing problems ) have strict hypertree width bounded by k , we have that C ( shw , k + 1 ) = C ( ig , k ) .\nBy definition , strict hypertree decompositions are special hypertree decompositions .\nIn fact , we are able to show that the additional conditions in Definition 2 induce an actual restriction on the decomposition power .\nTHEOREM 7 .\nC ( ig , k ) = C ( shw , k + 1 ) \u2282 C ( hw , k + 1 ) .\nA Game Theoretic View .\nWe shed further lights on strict hypertree decompositions by discussing an interesting characterization based on the strict Robber and Marshals Game , defined by adapting the Robber and Marshals game defined in [ 6 ] , which characterizes hypertree width .\nThe game is played on a hypergraph H by a robber against k marshals which act in coordination .\nMarshals move on the hyperedges of H , while the robber moves on nodes of H .\nThe robber sees where the marshals intend to move , and reacts by moving to another node which is connected with its current position and through a path in G ( H ) which does not use any node contained in a hyperedge that is occupied by the marshals before and after their move -- we say that these hyperedges are blocked .\nNote that in the basic game defined in [ 6 ] , the robber is not allowed to move on vertices that are occupied by the marshals before and after their move , even if they do not belong to blocked hyperedges .\nImportantly , marshals are required to play monotonically , i.e. 
, they can not occupy an edge that was previously occupied in the game , and which is currently not .\nThe marshals win the game if they capture the robber , by occupying an edge covering a node where the robber is .\nOtherwise , the robber wins .\nTHEOREM 8 .\nLet H be a hypergraph such that for each node v \u2208 N ( H ) , { v } is in E ( H ) .\nThen , H \u00af has a k-width strict hypertree decomposition if and only if k marshals can win the strict Robber and Marshals Game on \u00af H , no matter of the robber 's moves .\n5 .\nCONCLUSIONS\nWe have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario .\nThe result is bad news , since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph , even for treewidth 3 .\nMotivated by this result , we investigated the use of hypertree decomposition ( on the dual hypergraph associated with the scenario ) and we shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width .\nFor some special , yet relevant cases , a highly parallelizable algorithm is also discussed .\nInterestingly , it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width ( hence , the reason of their intractability is not their generality ) .\nIn particular , the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions ( on the dual hypergraph ) , called query decompositions ( see , e.g. , [ 7 ] ) .\nIn the light of this observation , we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions , which is currently missing in the literature .\nAs a further avenue of research , it would be relevant to enhance the algorithm ComputeSetPackingk , e.g. , by using specialized data structures , in order to avoid the quadratic dependency from ( | E ( H ) | + 1 ) k. Finally , an other interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem .\nFor instance , it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for selling , and when moreover the auctioneer is satisfied when at least a given number of copies is actually sold ."} {"id": "C-23", "title": "", "abstract": "", "keyphrases": ["distribut resourc", "data grid applic", "replic", "co-alloc", "larg dataset", "resourc manag protocol", "replica", "co-alloc strategi", "server", "perform", "grid comput", "data grid", "replica select", "data transfer", "globu", "gridftp"], "prmu": [], "lvl-1": "Implementation of a Dynamic Adjustment Mechanism with Efficient Replica Selection in Data Grid Environments Chao-Tung Yang I-Hsien Yang Chun-Hsiang Chen Shih-Yu Wang High-Performance Computing Laboratory Department of Computer Science and Information Engineering Tunghai University Taichung City, 40704, Taiwan R.O.C. 
ctyang@thu.edu.tw g932813@thu.edu.tw ABSTRACT The co-allocation architecture was developed in order to enable parallel downloading of datasets from multiple servers.\nSeveral co-allocation strategies have been coupled and used to exploit rate differences among various client-server links and to address dynamic rate fluctuations by dividing files into multiple blocks of equal sizes.\nHowever, a major obstacle, the idle time of faster servers having to wait for the slowest server to deliver the final block, makes it important to reduce differences in finishing time among replica servers.\nIn this paper, we propose a dynamic coallocation scheme, namely Recursive-Adjustment Co-Allocation scheme, to improve the performance of data transfer in Data Grids.\nOur approach reduces the idle time spent waiting for the slowest server and decreases data transfer completion time.\nWe also provide an effective scheme for reducing the cost of reassembling data blocks.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications.\nH.3.5 [Online Information Services]: Data sharing, Web-based services.\nGeneral Terms Management, Performance, Design, Experimentation.\n1.\nINTRODUCTION Data Grids aggregate distributed resources for solving large-size dataset management problems.\nMost Data Grid applications execute simultaneously and access large numbers of data files in the Grid environment.\nCertain data-intensive scientific applications, such as high-energy physics, bioinformatics applications and virtual astrophysical observatories, entail huge amounts of data that require data file management systems to replicate files and manage data transfers and distributed data access.\nThe data grid infrastructure integrates data storage devices and data management services into the grid environment, which consists of scattered computing and storage resources, perhaps located in different countries/regions yet accessible to users [12].\nReplicating popular content in distributed servers is widely used in practice [14, 17, 19].\nRecently, large-scale, data-sharing scientific communities such as those described in [1, 5] used this technology to replicate their large datasets over several sites.\nDownloading large datasets from several replica locations may result in varied performance rates, because the replica sites may have different architectures, system loadings, and network connectivity.\nBandwidth quality is the most important factor affecting transfers between clients and servers since download speeds are limited by the bandwidth traffic congestion in the links connecting the servers to the clients.\nOne way to improve download speeds is to determine the best replica locations using replica selection techniques [19].\nThis method selects the best servers to provide optimum transfer rates because bandwidth quality can vary unpredictably due to the sharing nature of the internet.\nAnother way is to use co-allocation technology [17] to download data.\nCo-allocation of data transfers enables the clients to download data from multiple locations by establishing multiple connections in parallel.\nThis can improve the performance compared to the single-server cases and alleviate the internet congestion problem [17].\nSeveral co-allocation strategies were provided in previous work [17].\nAn idle-time drawback remains since faster servers must wait for the slowest server to deliver its final block.\nTherefore, it is important to reduce the differences in finishing time among replica 
servers.\nIn this paper, we propose a dynamic co-allocation scheme based on co-allocation Grid data transfer architecture called RecursiveAdjustment Co-Allocation scheme that reduces the idle time spent waiting for the slowest server and improves data transfer performance [24].\nExperimental results show that our approach is superior to previous methods and achieved the best overall performance.\nWe also discuss combination cost and provide an effective scheme for reducing it.\nThe remainder of this paper is organized as follows.\nRelated background review and studies are presented in Section 2 and the co-allocation architecture and related work are introduced in Section 3.\nIn Section 4, an efficient replica selection service is proposed by us.\nOur research approaches are outlined in Section 5, and experimental results and a performance evaluation of our scheme are presented in Section 6.\nSection 7 concludes this research paper.\n2.\nBACKGROUND 2.1 Data Grid The Data Grids enable the sharing, selection, and connection of a wide variety of geographically distributed computational and storage resources for solving large-scale data intensive scientific applications (e.g., high energy physics, bioinformatics applications, and astrophysical virtual observatory).\nThe term Data Grid traditionally represents the network of distributed storage resources, from archival systems to caches and databases, which are linked using a logical name space to create global, persistent identifiers and provide uniform access mechanisms [4].\nData Grids [1, 2, 16] federate a lot of storage resources.\nLarge collections of measured or computed data are emerging as important resources in many data intensive applications.\n2.1.1 Replica Management Replica management involves creating or removing replicas at a data grid site [19].\nIn other words, the role of a replica manager is to create or delete replicas, within specified storage systems.\nMost often, these replicas are exact copies of the original files, created only to harness certain performance benefits.\nA replica manager typically maintains a replica catalog containing replica site addresses and the file instances.\nThe replica management service is responsible for managing the replication of complete and partial copies of datasets, defined as collections of files.\nThe replica management service is just one component in a Data Grid environment that provides support for high-performance, data-intensive applications.\nA replica or location is a subset of a collection that is stored on a particular physical storage system.\nThere may be multiple possibly overlapping subsets of a collection stored on multiple storage systems in a Data Grid.\nThese Grid storage systems may use a variety of underlying storage technologies and data movement protocols, which are independent of replica management.\n2.1.2 Replica Catalog As mentioned above, the purpose of the replica catalog is to provide mappings between logical names for files or collections and one or more copies of the objects on physical storage systems.\nThe replica catalog includes optional entries that describe individual logical files.\nLogical files are entities with globally unique names that may have one or more physical instances.\nThe catalog may optionally contain one logical file entry in the replica catalog for each logical file in a collection.\nA Data Grid may contain multiple replica catalogs.\nFor example, a community of researchers interested in a particular research topic might maintain a 
replica catalog for a collection of data sets of mutual interest.\nIt is possible to create hierarchies of replica catalogs to impose a directory-like structure on related logical collections.\nIn addition, the replica manager can perform access control on entire catalogs as well as on individual logical files.\n2.1.3 Replica Selection The purpose of replica selection [16] is to select a replica from among the sites which constitute a Data Grid [19].\nThe criteria of selection depend on characteristics of the application.\nBy using this mechanism, users of the Data Grid can easily manage replicas of data sets at their sites, with better performance.\nMuch previous effort has been devoted to the replica selection problem.\nThe common process of replica selection consists of three steps: data preparation, preprocessing and prediction.\nThen, applications can select a replica according to its specific attributes.\nReplica selection is important to data-intensive applications, and it can provide location transparency.\nWhen a user requests for accessing a data set, the system determines an appropriate way to deliver the replica to the user.\n2.2 Globus Toolkit and GridFTP The Globus Project [9, 11, 16] provides software tools collectively called The Globus Toolkit that makes it easier to build computational Grids and Grid-based applications.\nMany organizations use the Globus Toolkit to build computational Grids to support their applications.\nThe composition of the Globus Toolkit can be pictured as three pillars: Resource Management, Information Services, and Data Management.\nEach pillar represents a primary component of the Globus Toolkit and makes use of a common foundation of security.\nGRAM implements a resource management protocol, MDS implements an information services protocol, and GridFTP implements a data transfer protocol.\nThey all use the GSI security protocol at the connection layer [10, 11, 16, 13].\nThe Globus alliance proposed a common data transfer and access protocol called GridFTP that provides secure, efficient data movement in Grid environments [3].\nThis protocol, which extends the standard FTP protocol, provides a superset of the features offered by the various Grid storage systems currently in use.\nIn order to solve the appearing problems, the Data Grid community tries to develop a secure, efficient data transport mechanism and replica management services.\nGridFTP is a reliable, secure and efficient data transport protocol which is developed as a part of the Globus project.\nThere is another key technology from Globus project, called replica catalog [16] which is used to register and manage complete and partial copies of data sets.\nThe replica catalog contains the mapping information from a logical file or collection to one or more physical files.\n2.3 Network Weather Service The Network Weather Service (NWS) [22] is a generalized and distributed monitoring system for producing short-term performance forecasts based on historical performance measurements.\nThe goal of the system is to dynamically characterize and forecast the performance deliverable at the application level from a set of network and computational resources.\nA typical installation involves one nws_nameserver, one or more nws_memory (which may reside on different machines), and an nws_sensor running on each machine with resources which are to be monitored.\nThe system includes sensors for end-to-end TCP/IP performance (bandwidth and latency), available CPU percentage, and available non-paged 
memory.\n798 2.4 Sysstat Utilities The Sysstat [15] utilities are a collection of performance monitoring tools for the Linux OS.\nThe Sysstat package incorporates the sar, mpstat, and iostat commands.\nThe sar command collects and reports system activity information, which can also be saved in a system activity file for future inspection.\nThe iostat command reports CPU statistics and I/O statistics for tty devices and disks.\nThe statistics reported by sar concern I/O transfer rates, paging activity, process-related activities, interrupts, network activity, memory and swap space utilization, CPU utilization, kernel activities, and tty statistics, among others.\nUniprocessor (UP) and Symmetric multiprocessor (SMP) machines are fully supported.\n3.\nCO-ALLOCATION ARCHITECTURE AND RELATED WORK The co-allocation architecture proposed in [17] consists of three main components: an information service, a broker/co-allocator, and local storage systems.\nFigure 1 shows the co-allocation of Grid Data transfers, which is an extension of the basic template for resource management [7] provided by Globus Toolkit.\nApplications specify the characteristics of desired data and pass the attribute description to a broker.\nThe broker queries available resources and gets replica locations from information services [6] and replica management services [19], and then gets a list of physical locations for the desired files.\nFigure 1.\nData Grid Co-Allocation Architecture [17] The candidate replica locations are passed to a replica selection service [19], which was presented in a previous work [23].\nThis replica selection service provides estimates of candidate transfer performance based on a cost model and chooses appropriate amounts to request from the better locations.\nThe co-allocation agent then downloads the data in parallel from the selected servers.\nIn these researches, GridFTP [1, 11, 16] was used to enable parallel data transfers.\nGridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth widearea networks.\nAmong its many features are security, parallel streams, partial file transfers, third-party transfers, and reusable data channels.\nIts partial file transfer ability allows files to be retrieved from data servers by specifying the start and end offsets of file sections.\nData grids consist of scattered computing and storage resources located in different countries/regions yet accessible to users [8].\nIn this study we used the grid middleware Globus Toolkit [16] as the data grid infrastructure.\nThe Globus Toolkit provides solutions for such considerations as security, resource management, data management, and information services.\nOne of its primary components is MDS [6, 11, 16, 25], which is designed to provide a standard mechanism for discovering and publishing resource status and configuration information.\nIt provides a uniform and flexible interface for data collected by lower-level information providers in two modes: static (e.g., OS, CPU types, and system architectures) and dynamic data (e.g., disk availability, memory availability, and loading).\nAnd it uses GridFTP [1, 11, 16], a reliable, secure, and efficient data transport protocol to provide efficient management and transfer of terabytes or petabytes of data in a wide-area, distributed-resource environment.\nAs datasets are replicated within Grid environments for reliability and performance, clients require the abilities to discover existing data replicas, and create and register new 
replicas.\nA Replica Location Service (RLS) [4] provides a mechanism for discovering and registering existing replicas.\nSeveral prediction metrics have been developed to help replica selection.\nFor instance, Vazhkudai and Schopf [18, 20, 21] used past data transfer histories to estimate current data transfer throughputs.\nIn our previous work [23, 24], we proposed a replica selection cost model and a replica selection service to perform replica selection.\nIn [17], the author proposes co-allocation architecture for co-allocating Grid data transfers across multiple connections by exploiting the partial copy feature of GridFTP.\nIt also provides Brute-Force, History-Base, and Dynamic Load Balancing for allocating data block.\nBrute-Force Co-Allocation: Brute-Force Co-Allocation works by dividing the file size equally among available flows.\nIt does not address the bandwidth differences among the various client-server links.\nHistory-based Co-Allocation: The History-based CoAllocation scheme keeps block sizes per flow proportional to predicted transfer rates.\nConservative Load Balancing: One of their dynamic coallocation is Conservative Load Balancing.\nThe Conservative Load Balancing dynamic co-allocation strategy divides requested datasets into k disjoint blocks of equal size.\nAvailable servers are assigned single blocks to deliver in parallel.\nWhen a server finishes delivering a block, another is requested, and so on, till the entire file is downloaded.\nThe loadings on the co-allocated flows are automatically adjusted because the faster servers will deliver more quickly providing larger portions of the file.\nAggressive Load Balancing: Another dynamic coallocation strategy, presented in [17], is the Aggressive Load Balancing.\nThe Aggressive Load Balancing dynamic co-allocation strategy presented in [17] adds functions that change block size de-liveries by: (1) progressively increasing the amounts of data requested from faster servers, and (2) reducing the amounts of data requested from slower servers or ceasing to request data from them altogether.\nThe co-allocation strategies described above do not handle the shortcoming of faster servers having to wait for the slowest server to deliver its final block.\nIn most cases, this wastes much time and decreases overall performance.\nThus, we propose an efficient approach called Recursive-Adjustment Co-Allocation and based 799 on a co-allocation architecture.\nIt improves dynamic co-allocation and reduces waiting time, thus improving overall transfer performance.\n4.\nAN EFFICIENT REPLICA SELECTION SERVICE We constructed a replica selection service to enable clients to select the better replica servers in Data Grid environments.\nSee below for a detailed description.\n4.1 Replica Selection Scenario Our proposed replica selection model is illustrated in [23], which shows how a client identifies the best location for a desired replica transfer.\nThe client first logins in at a local site and executes the Data Grid platform application, which checks to see if the files are available at the local site.\nIf they are present at the local site, the application accesses them immediately; otherwise, it passes the logical file names to the replica catalog server, which returns a list of physical locations for all registered copies.\nThe application passes this list of replica locations to a replica selection server, which identifies the storage system destination locations for all candidate data transfer operations.\nThe replica selection server 
sends the possible destination locations to the information server, which provides performance measurements and predictions of the three system factors described below.\nThe replica selection server chooses better replica locations according to these estimates and returns the location information to the transfer application, which retrieves the replica through GridFTP.\nWhen the application finishes, it returns the results to the user.\n4.2 System Factors\nDetermining the best replica among many servers holding the same content is a significant problem.\nIn our model, we consider three system factors that affect replica selection:\nNetwork bandwidth: This is one of the most significant Data Grid factors, since data files in Data Grid environments are usually very large; data file transfer times therefore depend tightly on the available network bandwidth.\nBecause network bandwidth is an unstable, dynamic factor, we must measure it frequently and predict it as accurately as possible.\nThe Network Weather Service (NWS) is a powerful toolkit for this purpose.\nCPU load: Grid platforms consist of many heterogeneous systems built with different architectures, e.g., cluster platforms, supercomputers, and PCs.\nCPU load is a dynamic system factor, and a heavy CPU load on a site will certainly affect the data file download process from that site.\nWe measure it with the Globus Toolkit / MDS.\nI/O state: Data Grid nodes consist of different heterogeneous storage systems, and data files in Data Grids are huge.\nIf the I/O state of the site we wish to download files from is very busy, data transfer performance is directly affected.\nWe measure I/O states using the sysstat [15] utilities.\n4.3 Our Replica Selection Cost Model\nThe target function of a cost model for distributed and replicated data storage is the information score obtained from the information service.\nWe listed the influencing factors for our cost model in the preceding section; here we express these factors in mathematical notation for further analysis.\nWe assume node i is the local site the user or application logs in on, and node j possesses the replica the user or application wants.\nThe seven system parameters our replica selection cost model considers are:\nScore_{i-j}: the score value, representing how efficiently a user or application at node i can acquire a replica from node j\nP^{BW}_{i-j}: percentage of bandwidth available from node i to node j (current bandwidth divided by the highest theoretical bandwidth)\nW_{BW}: network bandwidth weight, defined by the Data Grid administrator\nP^{CPU}_{j}: percentage of node j CPU idle states\nW_{CPU}: CPU load weight, defined by the Data Grid administrator\nP^{I/O}_{j}: percentage of node j I/O idle states\nW_{I/O}: I/O state weight, defined by the Data Grid administrator\nUsing these system factors, we define the general formula\nScore_{i-j} = P^{BW}_{i-j} × W_{BW} + P^{CPU}_{j} × W_{CPU} + P^{I/O}_{j} × W_{I/O} (1)\nThe three weights in this formula, W_{BW}, W_{CPU}, and W_{I/O}, describe the network bandwidth, CPU, and I/O weights; they can be determined by Data Grid organization administrators according to the attributes of the storage systems at the Data Grid nodes, since some storage equipment does not affect CPU loading.\nAfter several experimental measurements, we determined that network bandwidth is the most significant factor directly influencing data transfer times.\nWhen we performed data transfers using the GridFTP protocol, we discovered that CPU and I/O status only slightly affect data transfer performance.\nThe respective weight values W_{BW}, W_{CPU}, and W_{I/O} in our Data Grid environment are therefore set to 80%, 10%, and 10%.
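To make the scoring step concrete, the following is a minimal sketch of how formula (1) could be applied to rank candidate replica servers, assuming the 80%/10%/10% weights reported above; the measurement values, the replica_score and rank_replicas helper names, and the candidate list are illustrative assumptions, not code or data from the paper.

```python
# Minimal sketch of the replica scoring in formula (1). The 0.8/0.1/0.1 weights
# follow the 80%/10%/10% setting reported above; candidate measurements and the
# helper names are illustrative assumptions.

W_BW, W_CPU, W_IO = 0.8, 0.1, 0.1      # administrator-defined weights

def replica_score(p_bw, p_cpu_idle, p_io_idle):
    """Formula (1): Score = P_bw*W_BW + P_cpu*W_CPU + P_io*W_IO, inputs in [0, 1]."""
    return p_bw * W_BW + p_cpu_idle * W_CPU + p_io_idle * W_IO

def rank_replicas(candidates):
    """Order candidate servers from best to worst score."""
    return sorted(candidates,
                  key=lambda c: replica_score(c["bw"], c["cpu_idle"], c["io_idle"]),
                  reverse=True)

if __name__ == "__main__":
    # Assumed fractions of available bandwidth / idle CPU / idle I/O per candidate.
    candidates = [
        {"name": "HIT", "bw": 0.62, "cpu_idle": 0.90, "io_idle": 0.85},
        {"name": "LZ",  "bw": 0.60, "cpu_idle": 0.70, "io_idle": 0.80},
        {"name": "DL",  "bw": 0.32, "cpu_idle": 0.95, "io_idle": 0.90},
        {"name": "PU",  "bw": 0.27, "cpu_idle": 0.60, "io_idle": 0.75},
    ]
    for c in rank_replicas(candidates):
        print(c["name"], round(replica_score(c["bw"], c["cpu_idle"], c["io_idle"]), 3))
```

In this sketch a higher score simply means a more attractive replica source; a real deployment would feed P^{BW}, P^{CPU}, and P^{I/O} from NWS, MDS, and sysstat measurements as described above.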
4.4 Co-Allocation Cost Analysis\nWhen clients download datasets using GridFTP co-allocation technology, three time costs are incurred: the time required for client authentication to the GridFTP server, the actual data transmission time, and the data block reassembly time.\nAuthentication Time: Before a transfer, the client must load a Globus proxy and authenticate itself to the GridFTP server with the specified user credentials.\nThe client then establishes a control channel, sets up transfer parameters, and requests data channel creation.\nWhen the channel has been established, the data begins flowing.\nTransmission Time: Transmission time is measured from the time the client starts transferring to the time all transmission jobs are finished, and it includes the time required for resetting data channels between transfer requests.\nData pathways need to be opened only once and may handle many transfers before being closed, which allows the same data pathways to be used for multiple file transfers.\nHowever, data channels must be explicitly reset between transfer requests; this reset is comparatively inexpensive.\nCombination Time: The co-allocation architecture exploits the partial copy feature of the GridFTP data movement tool to enable data transfers across multiple connections.\nWith partial file transfer, file sections can be retrieved from data servers by specifying only the section start and end offsets.\nWhen these file sections are delivered, they may need to be reassembled; the reassembly operation incurs an additional time cost.\n5.\nDYNAMIC CO-ALLOCATION STRATEGY\nDynamic co-allocation, described above, is the most efficient approach to reducing the influence of network variations between clients and servers.\nHowever, the idle time of faster servers awaiting the slowest server to deliver the last block is still a major factor affecting overall efficiency, one that Conservative Load Balancing and Aggressive Load Balancing [17] cannot effectively avoid.\nThe approach proposed in the present paper, a dynamic allocation mechanism called Recursive-Adjustment Co-Allocation, can overcome this and thus improve data transfer performance.\n5.1 Recursive-Adjustment Co-Allocation\nRecursive-Adjustment Co-Allocation works by continuously adjusting each replica server's workload to correspond to its real-time bandwidth during file transfers.\nThe goal is to make the expected finish time of all servers the same.\nAs Figure 2 shows, when an appropriate file section is first selected, it is divided into blocks sized according to the respective server bandwidths.\nThe co-allocator then assigns the blocks to servers for transfer.\nAt this moment, it is expected that all transfers will finish at the same time, E(T1).\nHowever, since server bandwidths may fluctuate during segment deliveries, actual completion times may differ (solid line in Figure 2).\nOnce the quickest server finishes its work at time T1, the next section is assigned to the servers again, which allows each server to finish its assigned workload by the new expected time E(T2).\nThese adjustments are repeated until the entire file transfer is finished.
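As a small illustration of the first allocation round just described, the hedged sketch below splits one section among servers in proportion to their measured bandwidths, so that every server's expected finish time comes out equal to E(T1); the bandwidth figures and the first_round_split helper are illustrative assumptions, not taken from the paper.

```python
# Illustrative first-round split: block sizes proportional to server bandwidth,
# so every server's expected finish time equals E(T1). Numbers are made up.

def first_round_split(section_size, bandwidths):
    """Return {server: block_size} with sizes proportional to bandwidth."""
    total_bw = sum(bandwidths.values())
    return {server: section_size * bw / total_bw for server, bw in bandwidths.items()}

if __name__ == "__main__":
    bandwidths = {"server1": 6.0, "server2": 4.0, "server3": 2.0}   # assumed MB/s
    blocks = first_round_split(120.0, bandwidths)                   # a 120 MB section
    for server, size in blocks.items():
        # size / bandwidth is identical for all servers: this is E(T1).
        print(f"{server}: {size:.1f} MB, expected finish in {size / bandwidths[server]:.1f} s")
```

Because each block size is proportional to the server's bandwidth, size divided by bandwidth is the same for every server, which is exactly the equal expected finish time the scheme aims for.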
Figure 2.\nThe adjustment process (three servers over two allocation rounds, with expected finish times E(T1) and E(T2)).\nThe Recursive-Adjustment Co-Allocation process is illustrated in Figure 3.\nWhen a user requests file A, the replica selection service responds with the subset of all available servers defined by the maximum performance matrix.\nThe co-allocation service gets this list of selected replica servers.\nAssuming n replica servers are selected, S_i denotes server i, with 1 ≤ i ≤ n.\nA connection for file downloading is then built to each server.\nThe Recursive-Adjustment Co-Allocation process is as follows.\nA new section of the file to be allocated is first defined.\nThe section size SE_j is\nSE_j = UnassignedFileSize × α, (0 < α < 1) (2)\nwhere SE_j denotes section j, with 1 ≤ j ≤ k, assuming we allocate k times during the download process.\nThus there are k sections, and T_j denotes the time at which section j is allocated.\nUnassignedFileSize is the portion of file A not yet distributed for downloading; initially, UnassignedFileSize is equal to the total size of file A. The rate α determines how much of the unassigned portion is allocated in each round.\nFigure 3.\nThe Recursive-Adjustment Co-Allocation process.\nIn the next step, SE_j is divided into several blocks and assigned to the n servers.\nEach server has a real-time transfer rate to the client of B_i, which is measured by the Network Weather Service (NWS) [18].\nThe block size per flow from SE_j for each server i at time T_j is\nS_i = (SE_j + Σ_{m=1}^{n} UnFinishSize_m) × B_i / Σ_{m=1}^{n} B_m − UnFinishSize_i (3)\nwhere UnFinishSize_i denotes the size of the unfinished transfer blocks assigned to server i in previous rounds; UnFinishSize_i is equal to zero in the first round.\nIdeally, based on the real-time bandwidths at time T_j, every flow is expected to finish its workload at the same future time.\nThis fulfills our requirement to minimize the time faster servers must wait for the slowest server to finish.\nIf network variations greatly degrade transfer rates, UnFinishSize_i may exceed (SE_j + Σ_{m=1}^{n} UnFinishSize_m) × B_i / Σ_{m=1}^{n} B_m, which is the total block size server i is expected to transfer after T_j.\nIn such cases, the co-allocator eliminates those servers in advance and assigns SE_j to the other servers.\nAfter allocation, all channels continue transferring data blocks.\nWhen a faster channel finishes its assigned data blocks, the co-allocator again allocates an unassigned section of file A.\nThis process of allocating data blocks to adjust the expected flow finish times continues until the entire file has been allocated.\n5.2 Determining When to Stop Continuous Adjustment\nOur approach obtains new sections by dividing the unassigned file range in each round of allocation, so the unassigned portion of the file becomes smaller after each allocation.\nSince adjustment is continuous, it would run as an endless loop if not limited by a stop condition; when is it appropriate to stop continuous adjustment?\nWe provide two monitoring criteria, LeastSize and ExpectFinishedTime, to enable users to define stop thresholds.\nWhen a threshold is reached, the co-allocation server stops dividing the remainder of the file and assigns that remainder as the final section.\nThe LeastSize criterion specifies the smallest portion we are willing to divide further: when UnassignedFileSize drops below the LeastSize specification, division stops.\nThe ExpectFinishedTime criterion specifies the remaining time the transfer is expected to take: when the expected transfer time of the unassigned portion of the file drops below the time specified by ExpectFinishedTime, file division stops.\nThe expected remaining transfer time is estimated as\nUnassignedFileSize / Σ_{i=1}^{n} B_i (4)
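The sketch below strings formulas (2), (3), and (4) together into one possible allocation loop; the ALPHA, LEAST_SIZE, and EXPECT_FINISHED_TIME constants, the helper names, and the toy bandwidth numbers are assumptions for illustration, and real unfinished-block sizes would come from monitoring the transfers rather than being held at zero as they are here.

```python
# Hypothetical sketch of a Recursive-Adjustment allocation loop built from
# formulas (2)-(4); helper names and constants are assumptions, and the
# unfinished-block sizes would normally be updated from live transfer progress.

ALPHA = 0.5                      # rate in formula (2), 0 < ALPHA < 1
LEAST_SIZE = 10 * 2**20          # LeastSize threshold: 10 MB, as in the experiments
EXPECT_FINISHED_TIME = 5.0       # ExpectFinishedTime threshold in seconds (assumed)

def next_section_size(unassigned):
    """Formula (2): SE_j = UnassignedFileSize * ALPHA."""
    return unassigned * ALPHA

def split_section(section_size, bandwidths, unfinished):
    """Formula (3): block size per server for this round.

    bandwidths maps server -> B_i (bytes/s); unfinished maps server -> UnFinishSize_i.
    """
    total_bw = sum(bandwidths.values())
    total_unfinished = sum(unfinished.values())
    blocks = {}
    for server, bw in bandwidths.items():
        share = (section_size + total_unfinished) * bw / total_bw
        blocks[server] = max(share - unfinished.get(server, 0.0), 0.0)
    return blocks

def should_stop(unassigned, bandwidths):
    """Stop dividing when the LeastSize or ExpectFinishedTime criterion is met."""
    expected_rest_time = unassigned / sum(bandwidths.values())   # formula (4)
    return unassigned < LEAST_SIZE or expected_rest_time < EXPECT_FINISHED_TIME

if __name__ == "__main__":
    unassigned = 100 * 2**20                          # toy example: a 100 MB file
    bandwidths = {"s1": 8e6, "s2": 4e6, "s3": 2e6}    # assumed rates in bytes/s
    unfinished = {"s1": 0.0, "s2": 0.0, "s3": 0.0}
    round_no = 0
    while unassigned > 0:
        if should_stop(unassigned, bandwidths):
            section = unassigned          # assign the remainder as the final section
        else:
            section = next_section_size(unassigned)
        blocks = split_section(section, bandwidths, unfinished)
        round_no += 1
        shares = {s: round(b / 2**20, 1) for s, b in blocks.items()}
        print(f"round {round_no}: section {section / 2**20:.1f} MB -> {shares} (MB)")
        unassigned -= section
        # A real co-allocator would update `unfinished` here from actual progress.
```

Note that when either stop criterion fires, the whole remaining portion is assigned as the final section, which terminates the loop and mirrors the behavior described in Section 5.2.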
These two criteria determine the size of the final section allocated.\nHigher threshold values induce fewer divisions and yield lower co-allocation costs, which include establishing connections, negotiation, reassembly, and so on; however, although the total co-allocation adjustment time may be lower, bandwidth variations may also exert more influence.\nBy contrast, lower threshold values induce more frequent dynamic server workload adjustments and, in the case of greater network fluctuations, result in smaller differences in server transfer finish times; however, lower values also increase the number of co-allocation adjustments, and hence increase co-allocation costs.\nTherefore, the internet environment, the transferred file sizes, and the co-allocation costs should all be considered in determining optimum thresholds.\n5.3 Reducing the Reassembly Overhead\nThe process of reassembling blocks after data transfers using co-allocation technology results in additional overhead and decreases overall performance.\nThe reassembly overhead is related to the total block size, and could be reduced by upgrading hardware capabilities or using better software algorithms.\nWe propose an efficient alternative reassembly mechanism to reduce the combination overhead incurred after all block transmissions are finished.\nUnlike the conventional method, in which assembly starts only after all blocks have been delivered, our mechanism starts assembling blocks as soon as the first deliveries finish.\nOf course, this makes it necessary to maintain the original splitting order.\nCo-allocation strategies such as Conservative Load Balancing and Recursive-Adjustment Co-Allocation produce additional blocks during file transfers and can benefit from enabling reassembly during data transfers.\nIf some blocks are assembled in advance, the time cost of assembling the blocks remaining after all transfers finish can be reduced.\n6.\nEXPERIMENTAL RESULTS AND ANALYSIS\nIn this section, we discuss the performance of our Recursive-Adjustment Co-Allocation strategy.\nWe evaluate four co-allocation schemes: (1) Brute-Force (Brute), (2) History-based (History), (3) Conservative Load Balancing (Conservative), and (4) Recursive-Adjustment Co-Allocation (Recursive).\nWe analyze the performance of each scheme by comparing their transfer finish times and the total idle time faster servers spend waiting for the slowest server to finish delivering the last block.\nWe also analyze the overall performance in the various cases.\nWe performed wide-area data transfer experiments using our GridFTP GUI client tool.\nWe executed our co-allocation client tool on our testbed at Tunghai University (THU), Taichung City, Taiwan, and fetched files from four selected replica servers: one at Providence University (PU), one at Li-Zen High School (LZ), one at Hsiuping Institute of Technology School (HIT), and one at Da-Li High School (DL).\nAll these institutions are in Taiwan, and each is at least 10 km from THU.\nFigure 4 shows our Data Grid testbed; all servers have Globus 3.0.2 or above installed.\nFigure 4.\nOur Data Grid testbed (the THU client is connected over the Internet to replica servers at PU, LZ, HIT, and DL; node hardware includes Celeron 900 MHz, Pentium 4 1.8-2.8 GHz, AMD Athlon XP 2400+, and dual Athlon MP 2000+ machines with 128 MB to 1 GB of RAM).\nIn the following experiments, we set α
= 0.5, the LeastSize threshold to 10MB, and experimented with file sizes of 10 MB, 50MB, 100MB, 500MB, 1000MB, 2000MB, and 4000MB.\nFor comparison, we measured the performance of Conservative Load Balancing on each size using the same block numbers.\nFigure 5 shows a snapshot of our GridFTP client tool.\nThis client tool is developed by using Java CoG.\nIt allows easier and more rapid application development by encouraging collaborative code reuse and avoiding duplication of effort among problem-solving environments, science portals, Grid middleware, and collaborative pilots.\nTable 1 shows average transmission rates between THU and each replica server.\nThese numbers were obtained by transferring files of 500MB, 1000MB, and 2000MB from a single replica server using our GridFTP client tool, and each number is an average over several runs.\nTable 1.\nGridFTP end-to-end transmission rate from THU to various servers Server Average transmission rate HIT 61.5 Mbps LZ 59.5 Mbps DL 32.1 Mbps PU 26.7 Mbps 802 Figure 5.\nOur GridFTP client tool We analyzed the effect of faster servers waiting for the slowest server to deliver the last block for each scheme.\nFigure 6(a) shows total idle time for various file sizes.\nNote that our RecursiveAdjustment Co-Allocation scheme achieved significant performance improvements over other schemes for every file size.\nThese results demonstrate that our approach efficiently reduces the differences in servers finish times.\nThe experimental results shown in Figure 6(b) indicate that our scheme beginning block reassembly as soon as the first blocks have been completely delivered reduces combination time, thus aiding co-allocation strategies like Conservative Load Balancing and RecursiveAdjustment Co-Allocation that produce more blocks during data transfers.\nFigure 7 shows total completion time experimental results in a detailed cost structure view.\nServers were at PU, DL, and HIT, with the client at THU..\nThe first three bars for each file size denote the time to download the entire file from single server, while the other bars show co-allocated downloads using all three servers.\nOur co-allocation scheme finished the job faster than the other co-allocation strategies.\nThus, we may infer that the main gains our technology offers are lower transmission and combination times than other co-allocation strategies.\n0 20 40 60 80 100 120 140 160 180 200 100\u00a0500\u00a01000\u00a01500 2000 File Size (MB) WaitTime(Sec) Brute3 History3 Conservative3 Recursive3 0 10 20 30 40 50 60 70 80 90 100 500\u00a01000\u00a01500\u00a02000 File Size (MB) CombinationTime(Sec) Brute3 History3 Conservative3 Recursive3 Figure 6.\n(a) Idle times for various methods; servers are at PU, DL, and HIT.\n(b) Combination times for various methods; servers are at PU, DL, and HIT.\nIn the next experiment, we used the Recursive-Adjustment CoAllocation strategy with various sets of replica servers and measured overall performances, where overall performance is: Total Performance = File size/Total Completion Time (5) Table 2 lists all experiments we performed and the sets of replica servers used.\nThe results in Figure 8(a) show that using coallocation technologies yielded no improvement for smaller file sizes such as 10MB.\nThey also show that in most cases, overall performance increased as the number of co-allocated flows increased.\nWe observed that for our testbed and our co-allocation technology, overall performance reached its highest value in the REC3_2 case.\nHowever, in the REC4 case, 
when we added one flow to the set of replica servers, the performance did not increase.\nOn the contrary, it decreased.\nWe can infer that the co-allocation efficiency reached saturation in the REC3_2 case, and that additional flows caused additional overhead and reduced overall performance.\nThis means that more download flows do not necessarily result in higher performance.\nWe must choose appropriate numbers of flows to achieve optimum performance.\nWe show the detailed cost structure view for the case of REC3_2 and the case of REC4 in Figure 8(b).\nThe detailed cost consists of authentication time, transfer time and combination time.\n0 100 200 300 400 500 600 PU1 DL1 HIT1 BRU3 HIS3 CON3 REC3 PU1 DL1 HIT1 BRU3 HIS3 CON3 REC3 PU1 DL1 HIT1 BRU3 HIS3 CON3 REC3 PU1 DL1 HIT1 BRU3 HIS3 CON3 REC3 500\u00a01000\u00a01500\u00a02000 File Size (MB) CompletionTime(Sec) Authentication Time Transmission Time Combination Time Figure 7.\nCompletion times for various methods; servers are at PU, DL, and HIT.\nTable 2.\nThe sets of replica servers for all cases Case Servers PU1 PU DL1 DL REC2 PU, DL REC3_1 PU, DL, LZ REC3_2 PU, DL, HIT REC4 PU, DL, HIT, LZ 0 10 20 30 40 50 60 70 10\u00a050\u00a0100\u00a0500 1000\u00a01500\u00a02000 File Size (MB) OverallPerformance(Mbits) PU1 DL1 REC2 REC3_1 REC3_2 REC4 0 10 20 30 40 50 60 70 REC3_2 REC4 REC3_2 REC4 REC3_2 REC4 REC3_2 REC4 REC3_2 REC4 REC3_2 REC4 REC3_2 REC4 10\u00a050\u00a0100\u00a0500 1000\u00a01500\u00a02000 File Size (MB) OverallPerformance(Mbits) Authentication Time Transmission Time Combination Time Figure 8.\n(a) Overall performances for various sets of servers.\n(b) Detailed cost structure view for the case of REC3_2 and the case of REC4.\n7.\nCONCLUSIONS The co-allocation architecture provides a coordinated agent for assigning data blocks.\nA previous work showed that the dynamic co-allocation scheme leads to performance improvements.\nHowever, it cannot handle the idle time of faster servers, which must wait for the slowest server to deliver its final block.\nWe proposed the Recursive-Adjustment Co-Allocation scheme to improve data transfer performances using the co-allocation architecture in [17].\nIn this approach, the workloads of selected replica servers are continuously adjusted during data transfers, and we provide a function that enables users to define a final 803 block threshold, according to their data grid environment.\nExperimental results show the effectiveness of our proposed technique in improving transfer time and reducing overall idle time spent waiting for the slowest server.\nWe also discussed the re-combination cost and provided an effective scheme for reducing it.\n8.\nREFERENCES [1] B. Allcock, J. Bester, J. Bresnahan, A. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, Data Management and Transfer in HighPerformance Computational Grid Environments, Parallel Computing, 28(5):749-771, May 2002.\n[2] B. Allcock, J. Bester, J. Bresnahan, A. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, Secure, Efficient Data Transport and Replica Management for High-Performance Data-Intensive Computing, Proc.\nof the Eighteenth IEEE Symposium on Mass Storage Systems and Technologies, pp. 13-28, 2001.\n[3] B. Allcock, S. Tuecke, I. Foster, A. Chervenak, and C. Kesselman.\nProtocols and Services for Distributed DataIntensive Science.\nACAT2000 Proceedings, pp. 161-163, 2000.\n[4] A. Chervenak, E. Deelman, I. Foster, L. Guy, W. Hoschek, A. Iamnitchi, C. Kesselman, P. 
Kunszt, and M. Ripeanu, Giggle: A Framework for Constructing Scalable Replica Location Services, Proc.\nof SC 2002, Baltimore, MD, 2002.\n[5] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, The Data Grid: Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets, Journal of Network and Computer Applications, 23:187-200, 2001.\n[6] K. Czajkowski, S. Fitzgerald, I. Foster, and C. Kesselman, Grid Information Services for Distributed Resource Sharing, Proc.\nof the Tenth IEEE International Symposium on High-Performance Distributed Computing (HPDC-10``01), 181-194, August 2001.\n[7] K. Czajkowski, I. Foster, and C. Kesselman.\nResource CoAllocation in Computational Grids, Proc.\nof the Eighth IEEE International Symposium on High Performance Distributed Computing (HPDC-8``99), August 1999.\n[8] F. Donno, L. Gaido, A. Ghiselli, F. Prelz, and M. Sgaravatto, DataGrid Prototype 1, TERENA Networking Conference, http://www.terena.nl/conferences/tnc2002/Papers/p5a2ghiselli.pdf, June 2002, [9] I. Foster, C. Kesselman, and S. Tuecke.\nThe Anatomy of the Grid: Enabling Scalable Virtual Organizations.\nInt.\nJ. of Supercomputer Applications and High Performance Computing, 15(3), pp. 200-222, 2001.\n[10] I. Foster and C. Kesselman, Globus: A Metacomputing Infrastructure Toolkit, Intl J. Supercomputer Applications, 11(2), pp. 115-128, 1997.\n[11] Global Grid Forum, http://www.ggf.org/ [12] W. Hoschek, J. Jaen-Martinez, A. Samar, H. Stockinger, and K. Stockinger, Data Management in an International Data Grid Project, Proc.\nof First IEEE/ACM International Workshop on Grid Computing - Grid 2000, Bangalore, India, December 2000.\n[13] IBM Red Books, Introduction to Grid Computing with Globus, IBM Press, www.redbooks.ibm.com/redbooks/pdfs/sg246895.pdf [14] H. Stockinger, A. Samar, B. Allcock, I. Foster, K. Holtman, and B. Tierney, File and Object Replication in Data Grids, Journal of Cluster Computing, 5(3):305-314, 2002.\n[15] SYSSTAT utilities home page, http://perso.wanadoo.fr/sebastien.godard/ [16] The Globus Alliance, http://www.globus.org/ [17] S. Vazhkudai, Enabling the Co-Allocation of Grid Data Transfers, Proc.\nof Fourth International Workshop on Grid Computing, pp. 41-51, November 2003.\n[18] S. Vazhkudai and J. Schopf, Using Regression Techniques to Predict Large Data Transfers, International Journal of High Performance Computing Applications (IJHPCA), 17:249-268, August 2003.\n[19] S. Vazhkudai, S. Tuecke, and I. Foster, Replica Selection in the Globus Data Grid, Proc.\nof the 1st International Symposium on Cluster Computing and the Grid (CCGRID 2001), pp. 106-113, May 2001.\n[20] S. Vazhkudai, J. Schopf, Predicting Sporadic Grid Data Transfers, Proc.\nof 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11 `02), pp. 188-196, July 2002.\n[21] S. Vazhkudai, J. Schopf, and I. Foster, Predicting the Performance of Wide Area Data Transfers, Proc.\nof the 16th International Parallel and Distributed Processing Symposium (IPDPS 2002), pp.34-43, April 2002, pp. 34 - 43.\n[22] R. Wolski, N. Spring, and J. Hayes, The Network Weather Service: A Distributed Resource Performance Forecasting Service for Metacomputing, Future Generation Computer Systems, 15(5-6):757-768, 1999.\n[23] Chao-Tung Yang, Chun-Hsiang Chen, Kuan-Ching Li, and Ching-Hsien Hsu, Performance Analysis of Applying Replica Selection Technology for Data Grid Environments, PaCT 2005, Lecture Notes in Computer Science, vol.\n3603, pp. 
278-287, Springer-Verlag, September 2005.\n[24] Chao-Tung Yang, I-Hsien Yang, Kuan-Ching Li, and ChingHsien Hsu A Recursive-Adjustment Co-Allocation Scheme in Data Grid Environments, ICA3PP 2005 Algorithm and Architecture for Parallel Processing, Lecture Notes in Computer Science, vol.\n3719, pp. 40-49, Springer-Verlag, October 2005.\n[25] X. Zhang, J. Freschl, and J. Schopf, A Performance Study of Monitoring and Information Services for Distributed Systems, Proc.\nof 12th IEEE International Symposium on High Performance Distributed Computing (HPDC-12 `03), pp. 270-282, August 2003.\n804", "lvl-3": "Implementation of a Dynamic Adjustment Mechanism with Efficient Replica Selection in Data Grid Environments\nABSTRACT\nThe co-allocation architecture was developed in order to enable parallel downloading of datasets from multiple servers .\nSeveral co-allocation strategies have been coupled and used to exploit rate differences among various client-server links and to address dynamic rate fluctuations by dividing files into multiple blocks of equal sizes .\nHowever , a major obstacle , the idle time of faster servers having to wait for the slowest server to deliver the final block , makes it important to reduce differences in finishing time among replica servers .\nIn this paper , we propose a dynamic coallocation scheme , namely Recursive-Adjustment Co-Allocation scheme , to improve the performance of data transfer in Data Grids .\nOur approach reduces the idle time spent waiting for the slowest server and decreases data transfer completion time .\nWe also provide an effective scheme for reducing the cost of reassembling data blocks .\n1 .\nINTRODUCTION\nData Grids aggregate distributed resources for solving large-size dataset management problems .\nMost Data Grid applications execute simultaneously and access large numbers of data files in the Grid environment .\nCertain data-intensive scientific applications , such as high-energy physics , bioinformatics\napplications and virtual astrophysical observatories , entail huge amounts of data that require data file management systems to replicate files and manage data transfers and distributed data access .\nThe data grid infrastructure integrates data storage devices and data management services into the grid environment , which consists of scattered computing and storage resources , perhaps located in different countries/regions yet accessible to users [ 12 ] .\nReplicating popular content in distributed servers is widely used in practice [ 14 , 17 , 19 ] .\nRecently , large-scale , data-sharing scientific communities such as those described in [ 1 , 5 ] used this technology to replicate their large datasets over several sites .\nDownloading large datasets from several replica locations may result in varied performance rates , because the replica sites may have different architectures , system loadings , and network connectivity .\nBandwidth quality is the most important factor affecting transfers between clients and servers since download speeds are limited by the bandwidth traffic congestion in the links connecting the servers to the clients .\nOne way to improve download speeds is to determine the best replica locations using replica selection techniques [ 19 ] .\nThis method selects the best servers to provide optimum transfer rates because bandwidth quality can vary unpredictably due to the sharing nature of the internet .\nAnother way is to use co-allocation technology [ 17 ] to download data .\nCo-allocation of data transfers enables the 
clients to download data from multiple locations by establishing multiple connections in parallel .\nThis can improve the performance compared to the single-server cases and alleviate the internet congestion problem [ 17 ] .\nSeveral co-allocation strategies were provided in previous work [ 17 ] .\nAn idle-time drawback remains since faster servers must wait for the slowest server to deliver its final block .\nTherefore , it is important to reduce the differences in finishing time among replica servers .\nIn this paper , we propose a dynamic co-allocation scheme based on co-allocation Grid data transfer architecture called RecursiveAdjustment Co-Allocation scheme that reduces the idle time spent waiting for the slowest server and improves data transfer performance [ 24 ] .\nExperimental results show that our approach is superior to previous methods and achieved the best overall performance .\nWe also discuss combination cost and provide an effective scheme for reducing it .\nThe remainder of this paper is organized as follows .\nRelated background review and studies are presented in Section 2 and the co-allocation architecture and related work are introduced in\nSection 3 .\nIn Section 4 , an efficient replica selection service is proposed by us .\nOur research approaches are outlined in Section 5 , and experimental results and a performance evaluation of our scheme are presented in Section 6 .\nSection 7 concludes this research paper .\n2 .\nBACKGROUND\n2.1 Data Grid\nThe Data Grids enable the sharing , selection , and connection of a wide variety of geographically distributed computational and storage resources for solving large-scale data intensive scientific applications ( e.g. , high energy physics , bioinformatics applications , and astrophysical virtual observatory ) .\nThe term `` Data Grid '' traditionally represents the network of distributed storage resources , from archival systems to caches and databases , which are linked using a logical name space to create global , persistent identifiers and provide uniform access mechanisms [ 4 ] .\nData Grids [ 1 , 2 , 16 ] federate a lot of storage resources .\nLarge collections of measured or computed data are emerging as important resources in many data intensive applications .\n2.1.1 Replica Management\nReplica management involves creating or removing replicas at a data grid site [ 19 ] .\nIn other words , the role of a replica manager is to create or delete replicas , within specified storage systems .\nMost often , these replicas are exact copies of the original files , created only to harness certain performance benefits .\nA replica manager typically maintains a replica catalog containing replica site addresses and the file instances .\nThe replica management service is responsible for managing the replication of complete and partial copies of datasets , defined as collections of files .\nThe replica management service is just one component in a Data Grid environment that provides support for high-performance , data-intensive applications .\nA replica or location is a subset of a collection that is stored on a particular physical storage system .\nThere may be multiple possibly overlapping subsets of a collection stored on multiple storage systems in a Data Grid .\nThese Grid storage systems may use a variety of underlying storage technologies and data movement protocols , which are independent of replica management .\n2.1.2 Replica Catalog\nAs mentioned above , the purpose of the replica catalog is to provide mappings between 
logical names for files or collections and one or more copies of the objects on physical storage systems .\nThe replica catalog includes optional entries that describe individual logical files .\nLogical files are entities with globally unique names that may have one or more physical instances .\nThe catalog may optionally contain one logical file entry in the replica catalog for each logical file in a collection .\nA Data Grid may contain multiple replica catalogs .\nFor example , a community of researchers interested in a particular research topic might maintain a replica catalog for a collection of data sets of mutual interest .\nIt is possible to create hierarchies of replica catalogs to impose a directory-like structure on related logical collections .\nIn addition , the replica manager can perform access control on entire catalogs as well as on individual logical files .\n2.1.3 Replica Selection\nThe purpose of replica selection [ 16 ] is to select a replica from among the sites which constitute a Data Grid [ 19 ] .\nThe criteria of selection depend on characteristics of the application .\nBy using this mechanism , users of the Data Grid can easily manage replicas of data sets at their sites , with better performance .\nMuch previous effort has been devoted to the replica selection problem .\nThe common process of replica selection consists of three steps : data preparation , preprocessing and prediction .\nThen , applications can select a replica according to its specific attributes .\nReplica selection is important to data-intensive applications , and it can provide location transparency .\nWhen a user requests for accessing a data set , the system determines an appropriate way to deliver the replica to the user .\n2.2 Globus Toolkit and GridFTP\nThe Globus Project [ 9 , 11 , 16 ] provides software tools collectively called The Globus Toolkit that makes it easier to build computational Grids and Grid-based applications .\nMany organizations use the Globus Toolkit to build computational Grids to support their applications .\nThe composition of the Globus Toolkit can be pictured as three pillars : Resource Management , Information Services , and Data Management .\nEach pillar represents a primary component of the Globus Toolkit and makes use of a common foundation of security .\nGRAM implements a resource management protocol , MDS implements an information services protocol , and GridFTP implements a data transfer protocol .\nThey all use the GSI security protocol at the connection layer [ 10 , 11 , 16 , 13 ] .\nThe Globus alliance proposed a common data transfer and access protocol called GridFTP that provides secure , efficient data movement in Grid environments [ 3 ] .\nThis protocol , which extends the standard FTP protocol , provides a superset of the features offered by the various Grid storage systems currently in use .\nIn order to solve the appearing problems , the Data Grid community tries to develop a secure , efficient data transport mechanism and replica management services .\nGridFTP is a reliable , secure and efficient data transport protocol which is developed as a part of the Globus project .\nThere is another key technology from Globus project , called replica catalog [ 16 ] which is used to register and manage complete and partial copies of data sets .\nThe replica catalog contains the mapping information from a logical file or collection to one or more physical files .\n2.3 Network Weather Service\nThe Network Weather Service ( NWS ) [ 22 ] is a generalized and 
distributed monitoring system for producing short-term performance forecasts based on historical performance measurements .\nThe goal of the system is to dynamically characterize and forecast the performance deliverable at the application level from a set of network and computational resources .\nA typical installation involves one nws_nameserver , one or more nws_memory ( which may reside on different machines ) , and an nws_sensor running on each machine with resources which are to be monitored .\nThe system includes sensors for end-to-end TCP/IP performance ( bandwidth and latency ) , available CPU percentage , and available non-paged memory .\n2.4 Sysstat Utilities\nThe Sysstat [ 15 ] utilities are a collection of performance monitoring tools for the Linux OS .\nThe Sysstat package incorporates the sar , mpstat , and iostat commands .\nThe sar command collects and reports system activity information , which can also be saved in a system activity file for future inspection .\nThe iostat command reports CPU statistics and I/O statistics for tty devices and disks .\nThe statistics reported by sar concern I/O transfer rates , paging activity , process-related activities , interrupts , network activity , memory and swap space utilization , CPU utilization , kernel activities , and tty statistics , among others .\nUniprocessor ( UP ) and Symmetric multiprocessor ( SMP ) machines are fully supported .\n3 .\nCO-ALLOCATION ARCHITECTURE AND RELATED WORK\n4 .\nAN EFFICIENT REPLICA SELECTION SERVICE\n4.1 Replica Selection Scenario\n4.2 System Factors\n4.3 Our Replica Selection Cost Model\n4.4 Co-Allocation Cost Analysis\n5 .\nDYNAMIC CO-ALLOCATION STRATEGY\n5.1 Recursive-Adjustment Co-Allocation\n5.2 Determining When to Stop Continuous Adjustment\n5.3 Reducing the Reassembly Overhead\n6 .\nEXPERIMENTAL RESULTS AND ANALYSIS\n7 .\nCONCLUSIONS\nThe co-allocation architecture provides a coordinated agent for assigning data blocks .\nA previous work showed that the dynamic co-allocation scheme leads to performance improvements .\nHowever , it can not handle the idle time of faster servers , which must wait for the slowest server to deliver its final block .\nWe proposed the Recursive-Adjustment Co-Allocation scheme to improve data transfer performances using the co-allocation architecture in [ 17 ] .\nIn this approach , the workloads of selected replica servers are continuously adjusted during data transfers , and we provide a function that enables users to define a final\nblock threshold , according to their data grid environment .\nExperimental results show the effectiveness of our proposed technique in improving transfer time and reducing overall idle time spent waiting for the slowest server .\nWe also discussed the re-combination cost and provided an effective scheme for reducing it .", "lvl-4": "Implementation of a Dynamic Adjustment Mechanism with Efficient Replica Selection in Data Grid Environments\nABSTRACT\nThe co-allocation architecture was developed in order to enable parallel downloading of datasets from multiple servers .\nSeveral co-allocation strategies have been coupled and used to exploit rate differences among various client-server links and to address dynamic rate fluctuations by dividing files into multiple blocks of equal sizes .\nHowever , a major obstacle , the idle time of faster servers having to wait for the slowest server to deliver the final block , makes it important to reduce differences in finishing time among replica servers .\nIn this paper , we propose a dynamic 
coallocation scheme , namely Recursive-Adjustment Co-Allocation scheme , to improve the performance of data transfer in Data Grids .\nOur approach reduces the idle time spent waiting for the slowest server and decreases data transfer completion time .\nWe also provide an effective scheme for reducing the cost of reassembling data blocks .\n1 .\nINTRODUCTION\nData Grids aggregate distributed resources for solving large-size dataset management problems .\nMost Data Grid applications execute simultaneously and access large numbers of data files in the Grid environment .\nCertain data-intensive scientific applications , such as high-energy physics , bioinformatics\napplications and virtual astrophysical observatories , entail huge amounts of data that require data file management systems to replicate files and manage data transfers and distributed data access .\nDownloading large datasets from several replica locations may result in varied performance rates , because the replica sites may have different architectures , system loadings , and network connectivity .\nOne way to improve download speeds is to determine the best replica locations using replica selection techniques [ 19 ] .\nThis method selects the best servers to provide optimum transfer rates because bandwidth quality can vary unpredictably due to the sharing nature of the internet .\nAnother way is to use co-allocation technology [ 17 ] to download data .\nCo-allocation of data transfers enables the clients to download data from multiple locations by establishing multiple connections in parallel .\nSeveral co-allocation strategies were provided in previous work [ 17 ] .\nAn idle-time drawback remains since faster servers must wait for the slowest server to deliver its final block .\nTherefore , it is important to reduce the differences in finishing time among replica servers .\nIn this paper , we propose a dynamic co-allocation scheme based on co-allocation Grid data transfer architecture called RecursiveAdjustment Co-Allocation scheme that reduces the idle time spent waiting for the slowest server and improves data transfer performance [ 24 ] .\nExperimental results show that our approach is superior to previous methods and achieved the best overall performance .\nWe also discuss combination cost and provide an effective scheme for reducing it .\nRelated background review and studies are presented in Section 2 and the co-allocation architecture and related work are introduced in\nSection 3 .\nIn Section 4 , an efficient replica selection service is proposed by us .\nOur research approaches are outlined in Section 5 , and experimental results and a performance evaluation of our scheme are presented in Section 6 .\nSection 7 concludes this research paper .\n2 .\nBACKGROUND\n2.1 Data Grid\nData Grids [ 1 , 2 , 16 ] federate a lot of storage resources .\nLarge collections of measured or computed data are emerging as important resources in many data intensive applications .\n2.1.1 Replica Management\nReplica management involves creating or removing replicas at a data grid site [ 19 ] .\nIn other words , the role of a replica manager is to create or delete replicas , within specified storage systems .\nMost often , these replicas are exact copies of the original files , created only to harness certain performance benefits .\nA replica manager typically maintains a replica catalog containing replica site addresses and the file instances .\nThe replica management service is responsible for managing the replication of complete and partial 
copies of datasets , defined as collections of files .\nThe replica management service is just one component in a Data Grid environment that provides support for high-performance , data-intensive applications .\nA replica or location is a subset of a collection that is stored on a particular physical storage system .\nThere may be multiple possibly overlapping subsets of a collection stored on multiple storage systems in a Data Grid .\nThese Grid storage systems may use a variety of underlying storage technologies and data movement protocols , which are independent of replica management .\n2.1.2 Replica Catalog\nAs mentioned above , the purpose of the replica catalog is to provide mappings between logical names for files or collections and one or more copies of the objects on physical storage systems .\nThe replica catalog includes optional entries that describe individual logical files .\nLogical files are entities with globally unique names that may have one or more physical instances .\nThe catalog may optionally contain one logical file entry in the replica catalog for each logical file in a collection .\nA Data Grid may contain multiple replica catalogs .\nFor example , a community of researchers interested in a particular research topic might maintain a replica catalog for a collection of data sets of mutual interest .\nIt is possible to create hierarchies of replica catalogs to impose a directory-like structure on related logical collections .\nIn addition , the replica manager can perform access control on entire catalogs as well as on individual logical files .\n2.1.3 Replica Selection\nThe purpose of replica selection [ 16 ] is to select a replica from among the sites which constitute a Data Grid [ 19 ] .\nThe criteria of selection depend on characteristics of the application .\nBy using this mechanism , users of the Data Grid can easily manage replicas of data sets at their sites , with better performance .\nMuch previous effort has been devoted to the replica selection problem .\nThe common process of replica selection consists of three steps : data preparation , preprocessing and prediction .\nThen , applications can select a replica according to its specific attributes .\nReplica selection is important to data-intensive applications , and it can provide location transparency .\nWhen a user requests for accessing a data set , the system determines an appropriate way to deliver the replica to the user .\n2.2 Globus Toolkit and GridFTP\nThe Globus Project [ 9 , 11 , 16 ] provides software tools collectively called The Globus Toolkit that makes it easier to build computational Grids and Grid-based applications .\nMany organizations use the Globus Toolkit to build computational Grids to support their applications .\nThe composition of the Globus Toolkit can be pictured as three pillars : Resource Management , Information Services , and Data Management .\nGRAM implements a resource management protocol , MDS implements an information services protocol , and GridFTP implements a data transfer protocol .\nThe Globus alliance proposed a common data transfer and access protocol called GridFTP that provides secure , efficient data movement in Grid environments [ 3 ] .\nThis protocol , which extends the standard FTP protocol , provides a superset of the features offered by the various Grid storage systems currently in use .\nIn order to solve the appearing problems , the Data Grid community tries to develop a secure , efficient data transport mechanism and replica management services 
.\nGridFTP is a reliable , secure and efficient data transport protocol which is developed as a part of the Globus project .\nThere is another key technology from Globus project , called replica catalog [ 16 ] which is used to register and manage complete and partial copies of data sets .\nThe replica catalog contains the mapping information from a logical file or collection to one or more physical files .\n2.3 Network Weather Service\nThe Network Weather Service ( NWS ) [ 22 ] is a generalized and distributed monitoring system for producing short-term performance forecasts based on historical performance measurements .\nThe goal of the system is to dynamically characterize and forecast the performance deliverable at the application level from a set of network and computational resources .\n2.4 Sysstat Utilities\nThe Sysstat [ 15 ] utilities are a collection of performance monitoring tools for the Linux OS .\nThe Sysstat package incorporates the sar , mpstat , and iostat commands .\nThe sar command collects and reports system activity information , which can also be saved in a system activity file for future inspection .\nThe iostat command reports CPU statistics and I/O statistics for tty devices and disks .\n7 .\nCONCLUSIONS\nThe co-allocation architecture provides a coordinated agent for assigning data blocks .\nA previous work showed that the dynamic co-allocation scheme leads to performance improvements .\nHowever , it can not handle the idle time of faster servers , which must wait for the slowest server to deliver its final block .\nWe proposed the Recursive-Adjustment Co-Allocation scheme to improve data transfer performances using the co-allocation architecture in [ 17 ] .\nIn this approach , the workloads of selected replica servers are continuously adjusted during data transfers , and we provide a function that enables users to define a final\nblock threshold , according to their data grid environment .\nExperimental results show the effectiveness of our proposed technique in improving transfer time and reducing overall idle time spent waiting for the slowest server .\nWe also discussed the re-combination cost and provided an effective scheme for reducing it .", "lvl-2": "Implementation of a Dynamic Adjustment Mechanism with Efficient Replica Selection in Data Grid Environments\nABSTRACT\nThe co-allocation architecture was developed in order to enable parallel downloading of datasets from multiple servers .\nSeveral co-allocation strategies have been coupled and used to exploit rate differences among various client-server links and to address dynamic rate fluctuations by dividing files into multiple blocks of equal sizes .\nHowever , a major obstacle , the idle time of faster servers having to wait for the slowest server to deliver the final block , makes it important to reduce differences in finishing time among replica servers .\nIn this paper , we propose a dynamic coallocation scheme , namely Recursive-Adjustment Co-Allocation scheme , to improve the performance of data transfer in Data Grids .\nOur approach reduces the idle time spent waiting for the slowest server and decreases data transfer completion time .\nWe also provide an effective scheme for reducing the cost of reassembling data blocks .\n1 .\nINTRODUCTION\nData Grids aggregate distributed resources for solving large-size dataset management problems .\nMost Data Grid applications execute simultaneously and access large numbers of data files in the Grid environment .\nCertain data-intensive scientific applications 
, such as high-energy physics , bioinformatics\napplications and virtual astrophysical observatories , entail huge amounts of data that require data file management systems to replicate files and manage data transfers and distributed data access .\nThe data grid infrastructure integrates data storage devices and data management services into the grid environment , which consists of scattered computing and storage resources , perhaps located in different countries/regions yet accessible to users [ 12 ] .\nReplicating popular content in distributed servers is widely used in practice [ 14 , 17 , 19 ] .\nRecently , large-scale , data-sharing scientific communities such as those described in [ 1 , 5 ] used this technology to replicate their large datasets over several sites .\nDownloading large datasets from several replica locations may result in varied performance rates , because the replica sites may have different architectures , system loadings , and network connectivity .\nBandwidth quality is the most important factor affecting transfers between clients and servers since download speeds are limited by the bandwidth traffic congestion in the links connecting the servers to the clients .\nOne way to improve download speeds is to determine the best replica locations using replica selection techniques [ 19 ] .\nThis method selects the best servers to provide optimum transfer rates because bandwidth quality can vary unpredictably due to the sharing nature of the internet .\nAnother way is to use co-allocation technology [ 17 ] to download data .\nCo-allocation of data transfers enables the clients to download data from multiple locations by establishing multiple connections in parallel .\nThis can improve the performance compared to the single-server cases and alleviate the internet congestion problem [ 17 ] .\nSeveral co-allocation strategies were provided in previous work [ 17 ] .\nAn idle-time drawback remains since faster servers must wait for the slowest server to deliver its final block .\nTherefore , it is important to reduce the differences in finishing time among replica servers .\nIn this paper , we propose a dynamic co-allocation scheme based on co-allocation Grid data transfer architecture called RecursiveAdjustment Co-Allocation scheme that reduces the idle time spent waiting for the slowest server and improves data transfer performance [ 24 ] .\nExperimental results show that our approach is superior to previous methods and achieved the best overall performance .\nWe also discuss combination cost and provide an effective scheme for reducing it .\nThe remainder of this paper is organized as follows .\nRelated background review and studies are presented in Section 2 and the co-allocation architecture and related work are introduced in\nSection 3 .\nIn Section 4 , an efficient replica selection service is proposed by us .\nOur research approaches are outlined in Section 5 , and experimental results and a performance evaluation of our scheme are presented in Section 6 .\nSection 7 concludes this research paper .\n2 .\nBACKGROUND\n2.1 Data Grid\nThe Data Grids enable the sharing , selection , and connection of a wide variety of geographically distributed computational and storage resources for solving large-scale data intensive scientific applications ( e.g. 
, high energy physics , bioinformatics applications , and astrophysical virtual observatory ) .\nThe term `` Data Grid '' traditionally represents the network of distributed storage resources , from archival systems to caches and databases , which are linked using a logical name space to create global , persistent identifiers and provide uniform access mechanisms [ 4 ] .\nData Grids [ 1 , 2 , 16 ] federate a lot of storage resources .\nLarge collections of measured or computed data are emerging as important resources in many data intensive applications .\n2.1.1 Replica Management\nReplica management involves creating or removing replicas at a data grid site [ 19 ] .\nIn other words , the role of a replica manager is to create or delete replicas , within specified storage systems .\nMost often , these replicas are exact copies of the original files , created only to harness certain performance benefits .\nA replica manager typically maintains a replica catalog containing replica site addresses and the file instances .\nThe replica management service is responsible for managing the replication of complete and partial copies of datasets , defined as collections of files .\nThe replica management service is just one component in a Data Grid environment that provides support for high-performance , data-intensive applications .\nA replica or location is a subset of a collection that is stored on a particular physical storage system .\nThere may be multiple possibly overlapping subsets of a collection stored on multiple storage systems in a Data Grid .\nThese Grid storage systems may use a variety of underlying storage technologies and data movement protocols , which are independent of replica management .\n2.1.2 Replica Catalog\nAs mentioned above , the purpose of the replica catalog is to provide mappings between logical names for files or collections and one or more copies of the objects on physical storage systems .\nThe replica catalog includes optional entries that describe individual logical files .\nLogical files are entities with globally unique names that may have one or more physical instances .\nThe catalog may optionally contain one logical file entry in the replica catalog for each logical file in a collection .\nA Data Grid may contain multiple replica catalogs .\nFor example , a community of researchers interested in a particular research topic might maintain a replica catalog for a collection of data sets of mutual interest .\nIt is possible to create hierarchies of replica catalogs to impose a directory-like structure on related logical collections .\nIn addition , the replica manager can perform access control on entire catalogs as well as on individual logical files .\n2.1.3 Replica Selection\nThe purpose of replica selection [ 16 ] is to select a replica from among the sites which constitute a Data Grid [ 19 ] .\nThe criteria of selection depend on characteristics of the application .\nBy using this mechanism , users of the Data Grid can easily manage replicas of data sets at their sites , with better performance .\nMuch previous effort has been devoted to the replica selection problem .\nThe common process of replica selection consists of three steps : data preparation , preprocessing and prediction .\nThen , applications can select a replica according to its specific attributes .\nReplica selection is important to data-intensive applications , and it can provide location transparency .\nWhen a user requests for accessing a data set , the system determines an appropriate way 
to deliver the replica to the user .\n2.2 Globus Toolkit and GridFTP\nThe Globus Project [ 9 , 11 , 16 ] provides software tools collectively called The Globus Toolkit that makes it easier to build computational Grids and Grid-based applications .\nMany organizations use the Globus Toolkit to build computational Grids to support their applications .\nThe composition of the Globus Toolkit can be pictured as three pillars : Resource Management , Information Services , and Data Management .\nEach pillar represents a primary component of the Globus Toolkit and makes use of a common foundation of security .\nGRAM implements a resource management protocol , MDS implements an information services protocol , and GridFTP implements a data transfer protocol .\nThey all use the GSI security protocol at the connection layer [ 10 , 11 , 16 , 13 ] .\nThe Globus alliance proposed a common data transfer and access protocol called GridFTP that provides secure , efficient data movement in Grid environments [ 3 ] .\nThis protocol , which extends the standard FTP protocol , provides a superset of the features offered by the various Grid storage systems currently in use .\nIn order to solve the appearing problems , the Data Grid community tries to develop a secure , efficient data transport mechanism and replica management services .\nGridFTP is a reliable , secure and efficient data transport protocol which is developed as a part of the Globus project .\nThere is another key technology from Globus project , called replica catalog [ 16 ] which is used to register and manage complete and partial copies of data sets .\nThe replica catalog contains the mapping information from a logical file or collection to one or more physical files .\n2.3 Network Weather Service\nThe Network Weather Service ( NWS ) [ 22 ] is a generalized and distributed monitoring system for producing short-term performance forecasts based on historical performance measurements .\nThe goal of the system is to dynamically characterize and forecast the performance deliverable at the application level from a set of network and computational resources .\nA typical installation involves one nws_nameserver , one or more nws_memory ( which may reside on different machines ) , and an nws_sensor running on each machine with resources which are to be monitored .\nThe system includes sensors for end-to-end TCP/IP performance ( bandwidth and latency ) , available CPU percentage , and available non-paged memory .\n2.4 Sysstat Utilities\nThe Sysstat [ 15 ] utilities are a collection of performance monitoring tools for the Linux OS .\nThe Sysstat package incorporates the sar , mpstat , and iostat commands .\nThe sar command collects and reports system activity information , which can also be saved in a system activity file for future inspection .\nThe iostat command reports CPU statistics and I/O statistics for tty devices and disks .\nThe statistics reported by sar concern I/O transfer rates , paging activity , process-related activities , interrupts , network activity , memory and swap space utilization , CPU utilization , kernel activities , and tty statistics , among others .\nUniprocessor ( UP ) and Symmetric multiprocessor ( SMP ) machines are fully supported .\n3 .\nCO-ALLOCATION ARCHITECTURE AND RELATED WORK\nThe co-allocation architecture proposed in [ 17 ] consists of three main components : an information service , a broker/co-allocator , and local storage systems .\nFigure 1 shows the co-allocation of Grid Data transfers , which is an extension 
of the basic template for resource management [ 7 ] provided by Globus Toolkit .\nApplications specify the characteristics of desired data and pass the attribute description to a broker .\nThe broker queries available resources and gets replica locations from information services [ 6 ] and replica management services [ 19 ] , and then gets a list of physical locations for the desired files .\nFigure 1 .\nData Grid Co-Allocation Architecture [ 17 ]\nThe candidate replica locations are passed to a replica selection service [ 19 ] , which was presented in a previous work [ 23 ] .\nThis replica selection service provides estimates of candidate transfer performance based on a cost model and chooses appropriate amounts to request from the better locations .\nThe co-allocation agent then downloads the data in parallel from the selected servers .\nIn these researches , GridFTP [ 1 , 11 , 16 ] was used to enable parallel data transfers .\nGridFTP is a high-performance , secure , reliable data transfer protocol optimized for high-bandwidth widearea networks .\nAmong its many features are security , parallel streams , partial file transfers , third-party transfers , and reusable data channels .\nIts partial file transfer ability allows files to be retrieved from data servers by specifying the start and end offsets of file sections .\nData grids consist of scattered computing and storage resources located in different countries/regions yet accessible to users [ 8 ] .\nIn this study we used the grid middleware Globus Toolkit [ 16 ] as the data grid infrastructure .\nThe Globus Toolkit provides solutions for such considerations as security , resource management , data management , and information services .\nOne of its primary components is MDS [ 6 , 11 , 16 , 25 ] , which is designed to provide a standard mechanism for discovering and publishing resource status and configuration information .\nIt provides a uniform and flexible interface for data collected by lower-level information providers in two modes : static ( e.g. , OS , CPU types , and system architectures ) and dynamic data ( e.g. 
, disk availability , memory availability , and loading ) .\nAnd it uses GridFTP [ 1 , 11 , 16 ] , a reliable , secure , and efficient data transport protocol to provide efficient management and transfer of terabytes or petabytes of data in a wide-area , distributed-resource environment .\nAs datasets are replicated within Grid environments for reliability and performance , clients require the abilities to discover existing data replicas , and create and register new replicas .\nA Replica Location Service ( RLS ) [ 4 ] provides a mechanism for discovering and registering existing replicas .\nSeveral prediction metrics have been developed to help replica selection .\nFor instance , Vazhkudai and Schopf [ 18 , 20 , 21 ] used past data transfer histories to estimate current data transfer throughputs .\nIn our previous work [ 23 , 24 ] , we proposed a replica selection cost model and a replica selection service to perform replica selection .\nIn [ 17 ] , the author proposes a co-allocation architecture for co-allocating Grid data transfers across multiple connections by exploiting the partial copy feature of GridFTP .\nIt also provides Brute-Force , History-based , and Dynamic Load Balancing strategies for allocating data blocks .\n\u2022 Brute-Force Co-Allocation : Brute-Force Co-Allocation works by dividing the file size equally among available flows .\nIt does not address the bandwidth differences among the various client-server links .\n\u2022 History-based Co-Allocation : The History-based Co-Allocation scheme keeps block sizes per flow proportional to predicted transfer rates .\n\u2022 Conservative Load Balancing : One of their dynamic co-allocation strategies is Conservative Load Balancing .\nThe Conservative Load Balancing dynamic co-allocation strategy divides requested datasets into `` k '' disjoint blocks of equal size .\nAvailable servers are assigned single blocks to deliver in parallel .\nWhen a server finishes delivering a block , another is requested , and so on , until the entire file is downloaded .\nThe loadings on the co-allocated flows are adjusted automatically because the faster servers deliver blocks more quickly and thus provide larger portions of the file .\n\u2022 Aggressive Load Balancing : Another dynamic co-allocation strategy presented in [ 17 ] is Aggressive Load Balancing .\nIt adds functions that change block size deliveries by : ( 1 ) progressively increasing the amounts of data requested from faster servers , and ( 2 ) reducing the amounts of data requested from slower servers or ceasing to request data from them altogether .\nThe co-allocation strategies described above do not handle the shortcoming of faster servers having to wait for the slowest server to deliver its final block .\nIn most cases , this wastes much time and decreases overall performance .\nThus , we propose an efficient approach called Recursive-Adjustment Co-Allocation , based on a co-allocation architecture .\nIt improves dynamic co-allocation and reduces waiting time , thus improving overall transfer performance .\n4 .\nAN EFFICIENT REPLICA SELECTION SERVICE\nWe constructed a replica selection service to enable clients to select the better replica servers in Data Grid environments .\nSee below for a detailed description .\n4.1 Replica Selection Scenario\nOur proposed replica selection model is illustrated in [ 23 ] , which shows how a client identifies the best location for a desired replica transfer .\nThe client first logs in at a 
local site and executes the Data Grid platform application , which checks to see if the files are available at the local site .\nIf they are present at the local site , the application accesses them immediately ; otherwise , it passes the logical file names to the replica catalog server , which returns a list of physical locations for all registered copies .\nThe application passes this list of replica locations to a replica selection server , which identifies the storage system destination locations for all candidate data transfer operations .\nThe replica selection server sends the possible destination locations to the information server , which provides performance measurements and predictions of the three system factors described below .\nThe replica selection server chooses better replica locations according to these estimates and returns location information to the transfer application , which receives the replica through GridFTP .\nWhen the application finishes , it returns the results to the user .\n4.2 System Factors\nDetermining the best database from many with the same replications is a significant problem .\nIn our model , we consider three system factors that affect replica selection :\n\u2022 Network bandwidth : This is one of the most significant Data Grid factors since data files in Data Grid environments are usually very large .\nIn other words , data file transfer times are tightly dependent on network bandwidth situations .\nBecause network bandwidth is an unstable dynamic factor , we must measure it frequently and predict it as accurately as possible .\nThe Network Weather Service ( NWS ) is a powerful toolkit for this purpose .\n\u2022 CPU load : Grid platforms consist of numbers of heterogeneous systems , built with different system architectures , e.g. 
, cluster platforms , supercomputers , and PCs .\nCPU loading is a dynamic system factor , and a heavy system CPU load will certainly affect the data file download process from that site .\nIt is measured using the Globus Toolkit / MDS .\n\u2022 I/O state : Data Grid nodes consist of different heterogeneous storage systems , and data files in Data Grids are huge .\nIf the I/O state of a site that we wish to download files from is very busy , it will directly affect data transfer performance .\nWe measure I/O states using the sysstat [ 15 ] utilities .\n4.3 Our Replica Selection Cost Model\nThe target function of a cost model for distributed and replicated data storage is the information score obtained from the information service .\nWe listed the influencing factors for our cost model in the preceding section ; here we express these factors in mathematical notation for further analysis .\nWe assume node i is the local site the user or application logs in on , and node j possesses the replica the user or application wants .\nThe seven system parameters our replica selection cost model considers are :\n\u2022 Scorei-j : the score value representing how efficiently a user or application at node i can acquire a replica from node j\n\u2022 PBW ( i-j ) : percentage of bandwidth available from node i to node j ; current bandwidth divided by highest theoretical bandwidth\n\u2022 WBW : network bandwidth weight defined by the Data Grid administrator\n\u2022 PCPU ( j ) : percentage of node j CPU idle states\n\u2022 WCPU : CPU load weight defined by the Data Grid administrator\n\u2022 PI/O ( j ) : percentage of node j I/O idle states\n\u2022 WI/O : I/O state weight defined by the Data Grid administrator\nWe define the following general formula using these system factors :\nScorei-j = WBW \u00d7 PBW ( i-j ) + WCPU \u00d7 PCPU ( j ) + WI/O \u00d7 PI/O ( j ) ( 1 )\nThe three weights in this formula , WBW , WCPU , and WI/O , describe the network bandwidth , CPU , and I/O weights , which can be determined by Data Grid organization administrators according to the various attributes of the storage systems in Data Grid nodes , since some storage equipment does not affect CPU loading .\nAfter several experimental measurements , we determined that network bandwidth is the most significant factor directly influencing data transfer times ; when we performed data transfers using the GridFTP protocol , we discovered that CPU and I/O statuses affect data transfer performance only slightly .\nTheir respective values in our Data Grid environment are 80 % , 10 % , and 10 % .
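To make the scoring concrete, the sketch below computes this weighted score for a set of candidate replica servers and picks the highest-scoring one. It is a minimal illustration, not the authors' implementation: the measurement inputs (bandwidth fraction from NWS, CPU idle from MDS, I/O idle from sysstat) are assumed to arrive as plain numbers in [0, 1], the host names are hypothetical, and the 0.8 / 0.1 / 0.1 weights follow the values reported above.

```python
# Hypothetical sketch of the weighted replica-selection score described above.
# Inputs are assumed to be normalized measurements in [0, 1] gathered elsewhere
# (e.g., bandwidth fraction via NWS, CPU idle via MDS, I/O idle via sysstat).
from dataclasses import dataclass

@dataclass
class ReplicaStats:
    host: str
    bw_fraction: float   # PBW(i-j): available / theoretical bandwidth
    cpu_idle: float      # PCPU(j): fraction of CPU idle time
    io_idle: float       # PI/O(j): fraction of I/O idle time

# Weights chosen by the Data Grid administrator (values reported in the paper).
W_BW, W_CPU, W_IO = 0.8, 0.1, 0.1

def score(r: ReplicaStats) -> float:
    """Score(i-j) = WBW*PBW + WCPU*PCPU + WI/O*PI/O."""
    return W_BW * r.bw_fraction + W_CPU * r.cpu_idle + W_IO * r.io_idle

def select_best(replicas: list[ReplicaStats]) -> ReplicaStats:
    # A higher score means a more attractive replica server.
    return max(replicas, key=score)

if __name__ == "__main__":
    candidates = [
        ReplicaStats("pu.example.edu", bw_fraction=0.60, cpu_idle=0.80, io_idle=0.70),
        ReplicaStats("lz.example.edu", bw_fraction=0.35, cpu_idle=0.95, io_idle=0.90),
    ]
    best = select_best(candidates)
    print(best.host, round(score(best), 3))
```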
4.4 Co-Allocation Cost Analysis\nWhen clients download datasets using GridFTP co-allocation technology , three time costs are incurred : the time required for client authentication to the GridFTP server , the actual data transmission time , and the data block reassembly time .\n\u2022 Authentication Time : Before a transfer , the client must load a Globus proxy and authenticate itself to the GridFTP server with specified user credentials .\nThe client then establishes a control channel , sets up transfer parameters , and requests data channel creation .\nWhen the channel has been established , the data begins flowing .\n\u2022 Transmission Time : Transmission time is measured from the time when the client starts transferring to the time when all transmission jobs are finished , and it includes the time required for resetting data channels between transfer requests .\nData pathways need to be opened only once and may handle many transfers before being closed , which allows the same data pathways to be used for multiple file transfers ; however , data channels must be explicitly reset between transfer requests , which is less time-costly .\n\u2022 Combination Time : The co-allocation architecture exploits the partial copy feature of the GridFTP data movement tool to enable data transfers across multiple connections .\nWith partial file transfer , file sections can be retrieved from data servers by specifying only the section start and end offsets .\nWhen these file sections are delivered , they may need to be reassembled ; the reassembly operation incurs an additional time cost .\n5 .\nDYNAMIC CO-ALLOCATION STRATEGY\nDynamic co-allocation , described above , is the most efficient approach to reducing the influence of network variations between clients and servers .\nHowever , the idle time of faster servers awaiting the slowest server to deliver the last block is still a major factor affecting overall efficiency , which Conservative Load Balancing and Aggressive Load Balancing [ 17 ] cannot effectively avoid .\nThe approach proposed in the present paper , a dynamic allocation mechanism called `` Recursive-Adjustment Co-Allocation '' , can overcome this and thus improve data transfer performance .\n5.1 Recursive-Adjustment Co-Allocation\nRecursive-Adjustment Co-Allocation works by continuously adjusting each replica server 's workload to correspond to its real-time bandwidth during file transfers .\nThe goal is to make the expected finish time of all servers the same .\nAs Figure 2 shows , when an appropriate file section is first selected , it is divided into proper block sizes according to the respective server bandwidths .\nThe co-allocator then assigns the blocks to servers for transfer .\nAt this moment , it is expected that the transfer finish time will be consistent at E ( T1 ) .\nHowever , since server bandwidths may fluctuate during segment deliveries , actual completion times may differ ( solid line in Figure 2 ) .\nOnce the quickest server finishes its work at time T1 , the next section is assigned to the servers again .\nThis allows each server to finish its assigned workload by the expected time E ( T2 ) .\nThese adjustments are repeated until the entire file transfer is finished .\nFigure 2 .\nThe adjustment process .\nThe Recursive-Adjustment Co-Allocation process is illustrated in Figure 3 .\nWhen a user requests file A , the replica selection service responds with the subset of all available servers defined by the maximum performance matrix .\nThe co-allocation service gets this list of selected replica servers .\nAssuming n replica servers are selected , Si denotes server `` i '' such that 1 \u2264 i \u2264 n .\nA connection for file downloading is then built to each server .\nThe Recursive-Adjustment Co-Allocation process is as follows .\nA new section of the file to be allocated is first defined .\nThe section size , `` SEj '' , is :\nSEj = UnassignedFileSize \u00d7 \u03b1 ( 2 )\nwhere SEj denotes section j such that 1 \u2264 j \u2264 k , assuming we allocate k times during the download process .\nThus , there are k sections , and Tj denotes the time at which section j is allocated .\nUnassignedFileSize is the portion of file A not yet distributed for downloading ; initially , UnassignedFileSize is equal to the total size of file A , and \u03b1 is the rate that determines how much of the unassigned file is allocated to each new section .\nFigure 3 .\nThe Recursive-Adjustment Co-Allocation process .\nIn the next step , SEj is divided into several blocks and assigned to the `` n '' servers .\nEach server has a real-time transfer rate Bi to the client , which is measured by the Network Weather Service ( NWS ) [ 18 ] .\nThe block size per flow from SEj for each server `` i '' at time Tj is :\nBlockSizei ( Tj ) = ( SEj + \u03a3k UnFinishSizek ) \u00d7 ( Bi / \u03a3k Bk ) - UnFinishSizei ( 3 )\nwhere UnFinishSizei denotes the size of the transfer blocks assigned to server `` i '' in previous rounds but not yet delivered ; UnFinishSizei is equal to zero in the first round , and the sums run over all n selected servers .\nIdeally , according to the real-time bandwidth at time Tj , every flow is expected to finish its workload at the same time .\nThis fulfills our requirement to minimize the time faster servers must wait for the slowest server to finish .\nIf , in some cases , network variations greatly degrade transfer rates , UnFinishSizei may already exceed the total block size server `` i '' is expected to transfer after Tj .\nIn such cases , the co-allocator eliminates those servers in advance and assigns SEj to the other servers .\nAfter allocation , all channels continue transferring data blocks .\nWhen a faster channel finishes its assigned data blocks , the co-allocator begins allocating an unassigned section of file A again .\nThis process of allocating data blocks to adjust the expected flow finish times continues until the entire file has been allocated .
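The following sketch illustrates one adjustment round of this allocation rule under stated assumptions: the server bandwidths Bi are assumed to come from an external monitor (NWS in the paper), the UnFinishSize values are tracked by the caller, and the function and variable names are ours, not the authors'. Servers whose unfinished backlog already exceeds their expected share receive no new data, mirroring the elimination step described above.

```python
# Hypothetical sketch of one Recursive-Adjustment allocation round (Eq. 2 and 3).
# bandwidths[i] : measured real-time rate Bi of server i (e.g., from NWS)
# unfinished[i] : bytes assigned to server i in earlier rounds but not yet delivered
# Returns the new block size (possibly 0) to request from each server.

def allocate_round(unassigned_file_size: int,
                   bandwidths: list[float],
                   unfinished: list[int],
                   alpha: float = 0.5) -> list[int]:
    section = unassigned_file_size * alpha            # SEj = UnassignedFileSize * alpha
    total_outstanding = section + sum(unfinished)     # work that should finish together
    total_bw = sum(bandwidths)
    blocks = []
    for b_i, unfin_i in zip(bandwidths, unfinished):
        share = total_outstanding * (b_i / total_bw)  # ideal share proportional to Bi
        new_block = share - unfin_i                   # subtract backlog from prior rounds
        # If the backlog already exceeds the share, this server gets nothing new
        # (the co-allocator would redistribute SEj among the remaining servers).
        blocks.append(max(0, int(new_block)))
    return blocks

# Example: 1000 MB left, three servers at 10, 5, and 1 MB/s, no backlog yet.
print(allocate_round(1000, [10.0, 5.0, 1.0], [0, 0, 0]))  # -> [312, 156, 31] (MB)
```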
5.2 Determining When to Stop Continuous Adjustment\nOur approach gets new sections from whole files by dividing the unassigned file range in each round of allocation .\nThese unassigned portions of the file become smaller after each allocation .\nSince adjustment is continuous , it would run as an endless loop if not limited by a stop condition .\nHowever , when is it appropriate to stop continuous adjustment ?\nWe provide two monitoring criteria , LeastSize and ExpectFinishedTime , to enable users to define stop thresholds .\nWhen a threshold is reached , the co-allocation server stops dividing the remainder of the file and assigns that remainder as the final section .\nThe LeastSize criterion specifies the smallest file portion we want to keep dividing ; when UnassignedFileSize drops below the LeastSize specification , division stops .\nThe ExpectFinishedTime criterion specifies the remaining time the transfer is expected to take ; when the expected transfer time of the unassigned portion of the file drops below ExpectFinishedTime , file division stops .\nThe expected remaining time is determined by :\nExpectedRestTime = UnassignedFileSize / \u03a3k Bk ( 4 )\nThese two criteria determine the final section size allocated .\nHigher threshold values induce fewer divisions and yield lower co-allocation costs , which include establishing connections , negotiation , reassembly , etc .\nHowever , although the total co-allocation adjustment time may be lower , bandwidth variations may also exert more influence .\nBy contrast , lower threshold values induce more frequent dynamic server workload adjustments and , in the case of greater network fluctuations , result in smaller differences in server transfer finish times .\nHowever , lower values will also increase co-allocation times , and hence , increase co-allocation costs .\nTherefore , the internet environment , transferred file sizes , and co-allocation costs should all be considered in determining optimum thresholds .
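A compact way to show how these two thresholds interact is the stop test below; it is a hedged sketch (names such as least_size and expect_finished_time are ours, and the 5-second default is purely illustrative) that simply applies the two rules described above to decide whether the next round should assign the final, undivided section.

```python
# Hypothetical stop test for the continuous-adjustment loop.
# Division stops when either threshold is reached; the remaining bytes are then
# assigned as one final section instead of being divided again.

def should_stop(unassigned_bytes: int,
                total_bandwidth: float,            # sum of current rates Bi, bytes/s
                least_size: int = 10 * 2**20,      # e.g., 10 MB, as in the experiments
                expect_finished_time: float = 5.0  # seconds; illustrative value only
                ) -> bool:
    expected_rest_time = unassigned_bytes / total_bandwidth   # Eq. 4
    return (unassigned_bytes <= least_size or
            expected_rest_time <= expect_finished_time)

# Example: 8 MB left at an aggregate 16 MB/s -> below LeastSize, so stop dividing.
print(should_stop(8 * 2**20, 16 * 2**20))  # True
```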
5.3 Reducing the Reassembly Overhead\nThe process of reassembling blocks after data transfers using co-allocation technology results in additional overhead and decreases overall performance .\nThe reassembly overhead is related to the total block size , and could be reduced by upgrading hardware capabilities or using better software algorithms .\nWe propose an efficient alternative reassembly mechanism to reduce the combination overhead remaining after all block transmissions are finished .\nUnlike the conventional method , in which the software starts assembly only after all blocks have been delivered , our mechanism starts assembling blocks as soon as the first deliveries finish .\nOf course , this makes it necessary to maintain the original splitting order .\nCo-allocation strategies such as Conservative Load Balancing and Recursive-Adjustment Co-Allocation produce additional blocks during file transfers and can benefit from enabling reassembly during data transfers .\nIf some blocks are assembled in advance , the time cost for assembling the blocks remaining after all transfers finish can be reduced .
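One simple way to realize this kind of early reassembly, sketched below under our own assumptions (a local output file, blocks identified by their byte offset), is to write each delivered block directly to its offset in a preallocated file, so that no separate concatenation pass remains once the last block arrives. This is only an illustrative variant of the idea, not the authors' implementation.

```python
# Hypothetical sketch of reassembly-during-transfer: blocks are written to their
# final byte offsets as soon as they arrive, instead of being concatenated at the end.
import os

class IncrementalAssembler:
    def __init__(self, path: str, total_size: int):
        self.path = path
        # Preallocate the output file so blocks can be written at any offset.
        with open(path, "wb") as f:
            f.truncate(total_size)

    def write_block(self, offset: int, data: bytes) -> None:
        # Called by each download flow when one of its blocks completes.
        with open(self.path, "r+b") as f:
            f.seek(offset)
            f.write(data)

# Example: two blocks delivered out of order still land in the right place.
asm = IncrementalAssembler("fileA.part", total_size=8)
asm.write_block(4, b"WXYZ")
asm.write_block(0, b"ABCD")
print(open("fileA.part", "rb").read())  # b'ABCDWXYZ'
os.remove("fileA.part")
```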
6 .\nEXPERIMENTAL RESULTS AND ANALYSIS\nIn this section , we discuss the performance of our Recursive-Adjustment Co-Allocation strategy .\nWe evaluate four co-allocation schemes : ( 1 ) Brute-Force ( Brute ) , ( 2 ) History-based ( History ) , ( 3 ) Conservative Load Balancing ( Conservative ) , and ( 4 ) Recursive-Adjustment Co-Allocation ( Recursive ) .\nWe analyze the performance of each scheme by comparing their transfer finish times , and the total idle time faster servers spent waiting for the slowest server to finish delivering the last block .\nWe also analyze the overall performances in the various cases .\nWe performed wide-area data transfer experiments using our GridFTP GUI client tool .\nWe executed our co-allocation client tool on our testbed at Tunghai University ( THU ) , Taichung City , Taiwan , and fetched files from four selected replica servers : one at Providence University ( PU ) , one at Li-Zen High School ( LZ ) , one at Hsiuping Institute of Technology School ( HIT ) , and one at Da-Li High School ( DL ) .\nAll these institutions are in Taiwan , and each is at least 10 km from THU .\nFigure 4 shows our Data Grid testbed .\nOur servers have Globus 3.0.2 or above installed .\nFigure 4 .\nOur Data Grid testbed\nIn the following experiments , we set \u03b1 = 0.5 and the LeastSize threshold to 10MB , and experimented with file sizes of 10MB , 50MB , 100MB , 500MB , 1000MB , 2000MB , and 4000MB .\nFor comparison , we measured the performance of Conservative Load Balancing on each size using the same block numbers .\nFigure 5 shows a snapshot of our GridFTP client tool .\nThis client tool was developed using Java CoG .\nIt allows easier and more rapid application development by encouraging collaborative code reuse and avoiding duplication of effort among problem-solving environments , science portals , Grid middleware , and collaborative pilots .\nTable 1 shows average transmission rates between THU and each replica server .\nThese numbers were obtained by transferring files of 500MB , 1000MB , and 2000MB from a single replica server using our GridFTP client tool , and each number is an average over several runs .\nTable 1 .\nGridFTP end-to-end transmission rate from THU to various servers\nFigure 5 .\nOur GridFTP client tool\nWe analyzed the effect of faster servers waiting for the slowest server to deliver the last block for each scheme .\nFigure 6 ( a ) shows total idle time for various file sizes .\nNote that our Recursive-Adjustment Co-Allocation scheme achieved significant performance improvements over the other schemes for every file size .\nThese results demonstrate that our approach efficiently reduces the differences in server finish times .\nThe experimental results shown in Figure 6 ( b ) indicate that beginning block reassembly as soon as the first blocks have been completely delivered reduces combination time , thus aiding co-allocation strategies like Conservative Load Balancing and Recursive-Adjustment Co-Allocation that produce more blocks during data transfers .\nFigure 7 shows total completion time experimental results in a detailed cost structure view .\nServers were at PU , DL , and HIT , with the client at THU
.\nThe first three bars for each file size denote the time to download the entire file from single server , while the other bars show co-allocated downloads using all three servers .\nOur co-allocation scheme finished the job faster than the other co-allocation strategies .\nThus , we may infer that the main gains our technology offers are lower transmission and combination times than other co-allocation strategies .\nFigure 6 .\n( a ) Idle times for various methods ; servers are at PU , DL , and HIT .\n( b ) Combination times for various methods ; servers are at PU , DL , and HIT .\nIn the next experiment , we used the Recursive-Adjustment CoAllocation strategy with various sets of replica servers and measured overall performances , where overall performance is : Total Performance = File size/Total Completion Time ( 5 ) Table 2 lists all experiments we performed and the sets of replica servers used .\nThe results in Figure 8 ( a ) show that using coallocation technologies yielded no improvement for smaller file sizes such as 10MB .\nThey also show that in most cases , overall performance increased as the number of co-allocated flows increased .\nWe observed that for our testbed and our co-allocation technology , overall performance reached its highest value in the REC3_2 case .\nHowever , in the REC4 case , when we added one flow to the set of replica servers , the performance did not increase .\nOn the contrary , it decreased .\nWe can infer that the co-allocation efficiency reached saturation in the REC3_2 case , and that additional flows caused additional overhead and reduced overall performance .\nThis means that more download flows do not necessarily result in higher performance .\nWe must choose appropriate numbers of flows to achieve optimum performance .\nWe show the detailed cost structure view for the case of REC3_2 and the case of REC4 in Figure 8 ( b ) .\nThe detailed cost consists of authentication time , transfer time and combination time .\nFigure 7 .\nCompletion times for various methods ; servers are at PU , DL , and HIT .\nTable 2 .\nThe sets of replica servers for all cases\nFigure 8 .\n( a ) Overall performances for various sets of servers .\n( b ) Detailed cost structure view for the case of REC3_2 and the case of REC4 .\n7 .\nCONCLUSIONS\nThe co-allocation architecture provides a coordinated agent for assigning data blocks .\nA previous work showed that the dynamic co-allocation scheme leads to performance improvements .\nHowever , it can not handle the idle time of faster servers , which must wait for the slowest server to deliver its final block .\nWe proposed the Recursive-Adjustment Co-Allocation scheme to improve data transfer performances using the co-allocation architecture in [ 17 ] .\nIn this approach , the workloads of selected replica servers are continuously adjusted during data transfers , and we provide a function that enables users to define a final\nblock threshold , according to their data grid environment .\nExperimental results show the effectiveness of our proposed technique in improving transfer time and reducing overall idle time spent waiting for the slowest server .\nWe also discussed the re-combination cost and provided an effective scheme for reducing it ."} {"id": "H-16", "title": "", "abstract": "", "keyphrases": ["effici cach system", "web search engin", "static cach", "dynam cach", "cach queri result", "cach post list", "static cach", "answer and post list", "queri log", "static cach effect", "the queri distribut", "data-access 
hierarchi", "disk layer", "remot server layer", "cach", "web search", "inform retriev system"], "prmu": [], "lvl-1": "The Impact of Caching on Search Engines Ricardo Baeza-Yates1 rbaeza@acm.org Aristides Gionis1 gionis@yahoo-inc.com Flavio Junqueira1 fpj@yahoo-inc.com Vanessa Murdock1 vmurdock@yahoo-inc.com Vassilis Plachouras1 vassilis@yahoo-inc.com Fabrizio Silvestri2 f.silvestri@isti.cnr.it 1 Yahoo! Research Barcelona 2 ISTI - CNR Barcelona, SPAIN Pisa, ITALY ABSTRACT In this paper we study the trade-offs in designing efficient caching systems for Web search engines.\nWe explore the impact of different approaches, such as static vs. dynamic caching, and caching query results vs. caching posting lists.\nUsing a query log spanning a whole year we explore the limitations of caching and we demonstrate that caching posting lists can achieve higher hit rates than caching query answers.\nWe propose a new algorithm for static caching of posting lists, which outperforms previous methods.\nWe also study the problem of finding the optimal way to split the static cache between answers and posting lists.\nFinally, we measure how the changes in the query log affect the effectiveness of static caching, given our observation that the distribution of the queries changes slowly over time.\nOur results and observations are applicable to different levels of the data-access hierarchy, for instance, for a memory/disk layer or a broker/remote server layer.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Search process; H.3.4 [Information Storage and Retrieval]: Systems and Software - Distributed systems, Performance evaluation (efficiency and effectiveness) General Terms Algorithms, Experimentation 1.\nINTRODUCTION Millions of queries are submitted daily to Web search engines, and users have high expectations of the quality and speed of the answers.\nAs the searchable Web becomes larger and larger, with more than 20 billion pages to index, evaluating a single query requires processing large amounts of data.\nIn such a setting, to achieve a fast response time and to increase the query throughput, using a cache is crucial.\nThe primary use of a cache memory is to speedup computation by exploiting frequently or recently used data, although reducing the workload to back-end servers is also a major goal.\nCaching can be applied at different levels with increasing response latencies or processing requirements.\nFor example, the different levels may correspond to the main memory, the disk, or resources in a local or a wide area network.\nThe decision of what to cache is either off-line (static) or online (dynamic).\nA static cache is based on historical information and is periodically updated.\nA dynamic cache replaces entries according to the sequence of requests.\nWhen a new request arrives, the cache system decides whether to evict some entry from the cache in the case of a cache miss.\nSuch online decisions are based on a cache policy, and several different policies have been studied in the past.\nFor a search engine, there are two possible ways to use a cache memory: Caching answers: As the engine returns answers to a particular query, it may decide to store these answers to resolve future queries.\nCaching terms: As the engine evaluates a particular query, it may decide to store in memory the posting lists of the involved query terms.\nOften the whole set of posting lists does not fit in memory, and consequently, the engine has to select a small 
set to keep in memory and speed up query processing.\nReturning an answer to a query that already exists in the cache is more efficient than computing the answer using cached posting lists.\nOn the other hand, previously unseen queries occur more often than previously unseen terms, implying a higher miss rate for cached answers.\nCaching of posting lists has additional challenges.\nAs posting lists have variable size, caching them dynamically is not very efficient, due to the complexity in terms of efficiency and space, and the skewed distribution of the query stream, as shown later.\nStatic caching of posting lists poses even more challenges: when deciding which terms to cache one faces the trade-off between frequently queried terms and terms with small posting lists that are space efficient.\nFinally, before deciding to adopt a static caching policy the query stream should be analyzed to verify that its characteristics do not change rapidly over time.\nBroker Static caching posting lists Dynamic/Static cached answers Local query processor Disk Next caching level Local network access Remote network access Figure 1: One caching level in a distributed search architecture.\nIn this paper we explore the trade-offs in the design of each cache level, showing that the problem is the same and only a few parameters change.\nIn general, we assume that each level of caching in a distributed search architecture is similar to that shown in Figure 1.\nWe use a query log spanning a whole year to explore the limitations of dynamically caching query answers or posting lists for query terms.\nMore concretely, our main conclusions are that: \u2022 Caching query answers results in lower hit ratios compared to caching of posting lists for query terms, but it is faster because there is no need for query evaluation.\nWe provide a framework for the analysis of the trade-off between static caching of query answers and posting lists; \u2022 Static caching of terms can be more effective than dynamic caching with, for example, LRU.\nWe provide algorithms based on the Knapsack problem for selecting the posting lists to put in a static cache, and we show improvements over previous work, achieving a hit ratio over 90%; \u2022 Changes of the query distribution over time have little impact on static caching.\nThe remainder of this paper is organized as follows.\nSections 2 and 3 summarize related work and characterize the data sets we use.\nSection 4 discusses the limitations of dynamic caching.\nSections 5 and 6 introduce algorithms for caching posting lists, and a theoretical framework for the analysis of static caching, respectively.\nSection 7 discusses the impact of changes in the query distribution on static caching, and Section 8 provides concluding remarks.\n2.\nRELATED WORK There is a large body of work devoted to query optimization.\nBuckley and Lewit [3], in one of the earliest works, take a term-at-a-time approach to deciding when inverted lists need not be further examined.\nMore recent examples demonstrate that the top k documents for a query can be returned without the need for evaluating the complete set of posting lists [1, 4, 15].\nAlthough these approaches seek to improve query processing efficiency, they differ from our current work in that they do not consider caching.\nThey may be considered separate and complementary to a cache-based approach.\nRaghavan and Sever [12], in one of the first papers on exploiting user query history, propose using a query base, built upon a set of persistent optimal 
queries submitted in the past, to improve the retrieval effectiveness for similar future queries.\nMarkatos [10] shows the existence of temporal locality in queries, and compares the performance of different caching policies.\nBased on the observations of Markatos, Lempel and Moran propose a new caching policy, called Probabilistic Driven Caching, by attempting to estimate the probability distribution of all possible queries submitted to a search engine [8].\nFagni et al. follow Markatos'' work by showing that combining static and dynamic caching policies together with an adaptive prefetching policy achieves a high hit ratio [7].\nDifferent from our work, they consider caching and prefetching of pages of results.\nAs systems are often hierarchical, there has also been some effort on multi-level architectures.\nSaraiva et al. propose a new architecture for Web search engines using a two-level dynamic caching system [13].\nTheir goal for such systems has been to improve response time for hierarchical engines.\nIn their architecture, both levels use an LRU eviction policy.\nThey find that the second-level cache can effectively reduce disk traffic, thus increasing the overall throughput.\nBaeza-Yates and Saint-Jean propose a three-level index organization [2].\nLong and Suel propose a caching system structured according to three different levels [9].\nThe intermediate level contains frequently occurring pairs of terms and stores the intersections of the corresponding inverted lists.\nThese last two papers are related to ours in that they exploit different caching strategies at different levels of the memory hierarchy.\nFinally, our static caching algorithm for posting lists in Section 5 uses the ratio frequency/size in order to evaluate the goodness of an item to cache.\nSimilar ideas have been used in the context of file caching [17], Web caching [5], and even caching of posting lists [9], but in all cases in a dynamic setting.\nTo the best of our knowledge we are the first to use this approach for static caching of posting lists.\n3.\nDATA CHARACTERIZATION Our data consists of a crawl of documents from the UK domain, and query logs of one year of queries submitted to http://www.yahoo.co.uk from November 2005 to November 2006.\nIn our logs, 50% of the total volume of queries are unique.\nThe average query length is 2.5 terms, with the longest query having 731 terms.\nFigure 2: The distribution of queries (bottom curve) and query terms (middle curve) in the query log, and the distribution of document frequencies of terms in the UK-2006 dataset (upper curve).\nFigure 2 shows the distributions of queries (lower curve), and query terms (middle curve).\nThe x-axis represents the normalized frequency rank of the query or term.\n(The most frequent query appears closest to the y-axis.)\nThe y-axis is the normalized frequency for a given query (or term).\nAs expected, the distribution of query frequencies and query term frequencies follow power law distributions, with slope of 1.84 and 2.26, respectively.\nIn this figure, the query frequencies were computed as they appear in the logs with no normalization for case or white space.\nThe query terms (middle curve) have been normalized for case, as have the terms in the document
collection.\nThe document collection that we use for our experiments is a summary of the UK domain crawled in May 2006 (the collection is available from the University of Milan: http://law.dsi.unimi.it/, URL retrieved 05/2007).\nThis summary corresponds to a maximum of 400 crawled documents per host, using a breadth first crawling strategy, comprising 15GB.\nThe distribution of document frequencies of terms in the collection follows a power law distribution with slope 2.38 (upper curve in Figure 2).\nThe statistics of the collection are shown in Table 1.\nTable 1: Statistics of the UK-2006 sample.\n# of documents 2,786,391; # of terms 6,491,374; # of tokens 2,109,512,558.\nWe measured the correlation between the document frequency of terms in the collection and the number of queries that contain a particular term in the query log to be 0.424.\nA scatter plot for a random sample of terms is shown in Figure 3.\nIn this experiment, terms have been converted to lower case in both the queries and the documents so that the frequencies will be comparable.\nFigure 3: Normalized scatter plot of document-term frequencies vs. query-term frequencies.\n4.\nCACHING OF QUERIES AND TERMS Caching relies upon the assumption that there is locality in the stream of requests.\nThat is, there must be sufficient repetition in the stream of requests and within intervals of time that enable a cache memory of reasonable size to be effective.\nIn the query log we used, 88% of the unique queries are singleton queries, and 44% are singleton queries out of the whole volume.\nThus, out of all queries in the stream composing the query log, the upper threshold on hit ratio is 56%.\nThis is because only 56% of all the queries comprise queries that have multiple occurrences.\nIt is important to observe, however, that not all queries in this 56% can be cache hits because of compulsory misses.\nFigure 4: Arrival rate for both terms and queries.\nA compulsory miss happens when the cache receives a query for the first time.\nThis is different from capacity misses, which happen due to space constraints on the amount of memory the cache uses.\nIf we consider a cache with infinite memory, then the hit ratio is 50%.\nNote that for an infinite cache there are no capacity misses.\nAs we mentioned before, another possibility is to cache the posting lists of terms.\nIntuitively, this gives more freedom in the utilization of the cache content to respond to queries because cached terms might form a new query.\nOn the other hand, they need more space.\nAs opposed to queries, the fraction of singleton terms in the total volume of terms is smaller.\nIn our query log, only 4% of the terms appear once, but this accounts for 73% of the vocabulary of query terms.\nWe show in Section 5 that caching a small fraction of terms, while accounting for terms appearing in many documents, is potentially very effective.\nFigure 4 shows several graphs corresponding to the normalized arrival rate for different cases using days as bins.\nThat is, we plot the normalized number of elements that appear in a day.\nThis graph shows only a period of 122 days, and we normalize the values by the maximum value observed throughout the whole period of the query log.\nTotal queries and Total terms correspond to the total volume of queries and terms, respectively.\nUnique queries and 
As we mentioned before, another possibility is to cache the posting lists of terms.\nIntuitively, this gives more freedom in the utilization of the cache content to respond to queries, because cached terms might form a new query.\nOn the other hand, posting lists need more space.\nAs opposed to queries, the fraction of singleton terms in the total volume of terms is smaller.\nIn our query log, only 4% of the terms appear once, but this accounts for 73% of the vocabulary of query terms.\nWe show in Section 5 that caching a small fraction of terms, while accounting for terms appearing in many documents, is potentially very effective.\nFigure 4: Arrival rate for both terms and queries.\nFigure 4 shows several graphs corresponding to the normalized arrival rate for different cases, using days as bins.\nThat is, we plot the normalized number of elements that appear in a day.\nThis graph shows only a period of 122 days, and we normalize the values by the maximum value observed throughout the whole period of the query log.\n"Total queries" and "Total terms" correspond to the total volume of queries and terms, respectively.\n"Unique queries" and "Unique terms" correspond to the arrival rate of unique queries and terms.\nFinally, "Query diff" and "Terms diff" correspond to the difference between the curves for total and unique.\nIn Figure 4, as expected, the volume of terms is much higher than the volume of queries.\nThe difference between the total number of terms and the number of unique terms is much larger than the difference between the total number of queries and the number of unique queries.\nThis observation implies that terms repeat significantly more than queries.\nIf we use smaller bins, say of one hour, then the ratio of unique elements to volume is higher for both terms and queries, because smaller bins leave less room for repetition.\nWe also estimated the workload using the document frequency of terms as a measure of how much work a query imposes on a search engine.\nWe found that it follows closely the arrival rate for terms shown in Figure 4.\nTo demonstrate the effect of a dynamic cache on the query frequency distribution of Figure 2, we plot the same frequency graph, but now considering the frequency of queries after going through an LRU cache (Figure 5).\nFigure 5: Frequency graph after LRU cache.\nOn a cache miss, an LRU cache decides upon an entry to evict using information on the recency of queries.\nIn this graph, the most frequent queries are not the same queries that were most frequent before the cache.\nIt is possible that queries that are most frequent after the cache have different characteristics, and tuning the search engine to queries frequent before the cache may degrade performance for non-cached queries.\nThe maximum frequency after caching is less than 1% of the maximum frequency before the cache, showing that the cache is very effective in reducing the load of frequent queries.\nIf we re-rank the queries according to after-cache frequency, the distribution is still a power law, but with a much smaller value for the highest frequency.\nWhen discussing the effectiveness of dynamic caching, an important metric is the cache miss rate.\nTo analyze the cache miss rate under different memory constraints, we use the working set model [6, 14].\nA working set, informally, is the set of references that an application or an operating system is currently working with.\nThe model uses such sets in a strategy that tries to capture the temporal locality of references.\nThe working set strategy consists of keeping in memory only the elements that were referenced in the previous θ steps of the input sequence, where θ is a configurable parameter corresponding to the window size.\nOriginally, working sets were used for page replacement algorithms in operating systems, and considering such a strategy in the context of search engines is interesting for three reasons.\nFirst, it captures the amount of locality of queries and terms in a sequence of queries.\nLocality in this case refers to the frequency of queries and terms in a window of time.\nIf many queries appear multiple times in a window, then locality is high.\nSecond, it enables an offline analysis of the expected miss rate given different memory constraints.\nThird, working sets capture aspects of efficient caching algorithms such as LRU.\nLRU assumes that references farther in the past are less likely to be referenced in the present, which is implicit in the concept of working sets [14].
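The working-set analysis can be sketched as follows (our illustration, not the paper's code): for a window size θ, count a miss whenever the current element was not referenced within the previous θ requests; splitting queries into terms by whitespace is a simplifying assumption.

def working_set_miss_rate(stream, theta):
    """Miss rate of a working-set policy: an element hits only if it was
    referenced within the previous theta requests of the stream."""
    last_seen = {}          # element -> index of its most recent reference
    misses = 0
    for i, elem in enumerate(stream):
        prev = last_seen.get(elem)
        if prev is None or i - prev > theta:
            misses += 1     # compulsory miss, or last reference fell outside the window
        last_seen[elem] = i
    return misses / len(stream) if stream else 0.0

# Hypothetical usage, mirroring Figure 6: sweep window sizes for queries and terms.
# queries = [line.strip().lower() for line in open("queries.txt")]
# terms = [t for q in queries for t in q.split()]
# for theta in (1000, 10000, 100000):
#     print(theta, working_set_miss_rate(queries, theta), working_set_miss_rate(terms, theta))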
Figure 6 plots the miss rate for different working set sizes, and we consider working sets of both queries and terms.\nThe working set sizes are normalized against the total number of queries in the query log.\nFigure 6: Miss rate as a function of the working set size.\nIn the graph for queries, there is a sharp decay until approximately 0.01, and the rate at which the miss rate drops decreases as we increase the size of the working set beyond 0.01.\nFinally, the minimum value it reaches is a 50% miss rate, not shown in the figure as we have cut the tail of the curve for presentation purposes.\nCompared to the query curve, we observe that the minimum miss rate for terms is substantially smaller.\nThe miss rate for terms also drops sharply for values up to 0.01, and it decreases only minimally for higher values.\nThe minimum value, however, is slightly over 10%, which is much smaller than the minimum value for the sequence of queries.\nThis implies that with such a policy it is possible to achieve over 80% hit rate if we dynamically cache posting lists for terms, as opposed to caching answers for queries.\nThis result does not consider the space required for each unit stored in the cache memory, or the amount of time it takes to put together a response to a user query.\nWe analyze these issues more carefully later in this paper.\nIt is also interesting to observe the histogram of Figure 7, which is an intermediate step in the computation of the miss rate graph.\nFigure 7: Distribution of distances expressed in terms of distinct queries.\nIt reports the distribution of distances between repetitions of the same frequent query.\nThe distance in the plot is measured as the number of distinct queries separating a query and its repetition, and it considers only queries appearing at least 10 times.\nFrom Figures 6 and 7, we conclude that even if we set the size of the query answer cache to a relatively large number of entries, the miss rate is high.\nThus, caching the posting lists of terms has the potential to improve the hit ratio.\nThis is what we explore next.\n5. CACHING POSTING LISTS\nThe previous section shows that caching posting lists can obtain a higher hit rate than caching query answers.\nIn this section we study the problem of how to select posting lists to place in a given amount of available memory, assuming that the whole index is larger than the amount of memory available.\nThe posting lists have variable size (in fact, their size distribution follows a power law), so it is beneficial for a caching policy to take the sizes of the posting lists into account.\nWe consider both dynamic and static caching.\nFor dynamic caching, we use two well-known policies, LRU and LFU, as well as a modified algorithm that takes posting-list size into account.\nBefore discussing the static caching strategies, we introduce some notation.\nWe use fq(t) to denote the query-term frequency of a term t, that is, the number of queries containing t in the query log, and fd(t) to denote the document frequency of t, that is, the number of documents in the collection in which the term t appears.\nThe first strategy we consider is the algorithm proposed by Baeza-Yates and Saint-Jean [2], which consists of selecting the posting lists of the terms with the highest query-term frequencies fq(t).\nWe call this algorithm Qtf.\nWe observe that there is a trade-off between fq(t) and fd(t).\nTerms with high fq(t) are useful to keep in the cache because they are queried often.\nOn the other hand, terms with high fd(t) are not good candidates because they correspond to long posting lists and consume a substantial amount of space.
In fact, the problem of selecting the best posting lists for the static cache corresponds to the standard knapsack problem: given a knapsack of fixed capacity and a set of n items, such that the i-th item has value ci and size si, select the set of items that fit in the knapsack and maximize the overall value.\nIn our case, the value corresponds to fq(t) and the size corresponds to fd(t).\nThus, we employ a simple algorithm for the knapsack problem: we select the posting lists of the terms with the highest values of the ratio fq(t)/fd(t).\nWe call this algorithm QtfDf (a short sketch of this selection is given after the list below).\nWe tried other variations considering query frequencies instead of term frequencies, but the gain was minimal compared to the complexity added.\nIn addition to the above two static algorithms, we consider the following algorithms for dynamic caching:\n• LRU: A standard LRU algorithm, except that several posting lists might need to be evicted (in order of least-recent usage) until there is enough space in memory to place the currently accessed posting list;\n• LFU: A standard LFU algorithm (eviction of the least-frequently used), with the same modification as for LRU;\n• Dyn-QtfDf: A dynamic version of the QtfDf algorithm, which evicts from the cache the term(s) with the lowest fq(t)/fd(t) ratio.
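A minimal sketch of the QtfDf selection and of the Dyn-QtfDf eviction rule (ours, under the assumption that fq, fd and a cache budget measured in postings are given):

def static_qtfdf(fq, fd, budget):
    """Greedy knapsack heuristic: fill the static cache with the posting lists
    of the terms with the highest fq(t)/fd(t) ratio until the budget is used.

    fq, fd: dicts mapping term -> query-term frequency / document frequency.
    budget: cache capacity, here measured in number of postings."""
    ranked = sorted(fq, key=lambda t: fq[t] / fd[t], reverse=True)
    cached, used = set(), 0
    for t in ranked:
        if used + fd[t] <= budget:
            cached.add(t)
            used += fd[t]
    return cached

def dyn_qtfdf_evict(cache, fq, fd, needed):
    """Eviction rule of Dyn-QtfDf: drop the cached term(s) with the lowest
    fq(t)/fd(t) ratio until 'needed' postings of space have been freed."""
    freed = 0
    for t in sorted(cache, key=lambda t: fq[t] / fd[t]):
        if freed >= needed:
            break
        cache.remove(t)
        freed += fd[t]
    return freed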
The performance of all the above algorithms over 15 weeks of the query log and the UK dataset is shown in Figure 8.\nPerformance is measured by hit rate.\nThe cache size is measured as a fraction of the total space required to store the posting lists of all terms.\nFor the dynamic algorithms, we load the cache with terms in order of fq(t) and we let the cache warm up for 1 million queries.\nFor the static algorithms, we assume complete knowledge of the frequencies fq(t), that is, we estimate fq(t) from the whole query stream.\nAs we show in Section 7, the results do not change much if we compute the query-term frequencies using the first 3 or 4 weeks of the query log and measure the hit rate on the rest.\nFigure 8: Hit rate of different strategies for caching posting lists.\nThe most important observation from our experiments is that the static QtfDf algorithm has a better hit rate than all the dynamic algorithms.\nAn important benefit of a static cache is that it requires no eviction and is hence more efficient when evaluating queries.\nHowever, if the characteristics of the query traffic change frequently over time, the cache must be re-populated often or there will be a significant impact on hit rate.\n6. ANALYSIS OF STATIC CACHING\nIn this section we provide a detailed analysis of the problem of deciding whether it is preferable to cache query answers or to cache posting lists.\nOur analysis takes into account the impact of caching between two levels of the data-access hierarchy.\nIt can be applied either at the memory/disk layer or at a server/remote-server layer, as in the architecture we discussed in the introduction.\nUsing a particular system model, we obtain estimates for the parameters required by our analysis, which we subsequently use to decide the optimal trade-off between caching query answers and caching posting lists.\n6.1 Analytical Model\nLet M be the size of the cache measured in answer units (the cache can store M query answers).\nAssume that all posting lists are of the same length L, measured in answer units.\nWe consider the following two cases: (A) a cache that stores only precomputed answers, and (B) a cache that stores only posting lists.\nIn the first case, Nc = M answers fit in the cache, while in the second case Np = M/L posting lists fit in the cache; thus, Np = Nc/L.\nNote that although posting lists require more space, we can combine cached terms to evaluate more queries (or partial queries).\nFor case (A), suppose that a query whose answer is in the cache can be served in 1 time unit.\nFor case (B), assume that if the posting lists of the terms of a query are in the cache, then the results can be computed in TR1 time units, while if the posting lists are not in the cache, then the results can be computed in TR2 time units.\nOf course TR2 > TR1.\nNow we want to compare the time needed to answer a stream of Q queries in both cases.\nLet Vc(Nc) be the volume of the most frequent Nc queries.\nThen, for case (A), the overall time is TCA = Vc(Nc) + TR2(Q − Vc(Nc)).\nSimilarly, for case (B), let Vp(Np) be the volume of queries computable from the cached posting lists.\nThen the overall time is TPL = TR1 Vp(Np) + TR2(Q − Vp(Np)).\nWe want to check under which conditions TPL < TCA; this holds exactly when (TR2 − TR1) Vp(Np) > (TR2 − 1) Vc(Nc).\nFigure 9 shows the values of Vp and Vc for our data.\nFigure 9: Cache saturation as a function of size.\nWe can see that caching answers saturates faster, and for this particular data there is no additional benefit from using more than 10% of the index space for caching answers.\nAs the query distribution is a power law with parameter α > 1, the i-th most frequent query appears with probability proportional to 1/i^α.\nTherefore, the volume Vc(n), that is, the total volume of the n most frequent queries, is Vc(n) = V0 Σ_{i=1..n} Q/i^α = γ_n Q, with 0 < γ_n < 1.\nWe know that Vp(n) grows faster than Vc(n) and assume, based on experimental results, that the relation is of the form Vp(n) = k Vc(n)^β.\nIn the worst case, for a large cache, β → 1.\nThat is, both techniques cache a constant fraction of the overall query volume.\nThen caching posting lists makes sense only if k(TR2 − TR1) / (L(TR2 − 1)) > 1.\nIf we use compression, we have L' < L and TR1' > TR1.\nAccording to the experiments that we show later, compression is always better.\nFor a small cache, we are interested in the transient behavior, and then β > 1, as computed from our data.\nIn this case there will always be a point where TPL > TCA for a large number of queries.\nIn reality, instead of filling the cache only with answers or only with posting lists, a better strategy is to divide the total cache space into a cache for answers and a cache for posting lists.\nIn such a case, some queries could be answered by both parts of the cache.\nAs the answer cache is faster, it will be the first choice for answering those queries.\nLet QNc and QNp be the sets of queries that can be answered by the cached answers and by the cached posting lists, respectively.\nThen the overall time is T = Vc(Nc) + TR1 V(QNp − QNc) + TR2(Q − V(QNp ∪ QNc)), where Np = (M − Nc)/L.\nFinding the division of the cache that minimizes the overall retrieval time is a difficult problem to solve analytically.\nIn Section 6.3 we use simulations to derive optimal cache trade-offs for particular implementation examples.
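As a concrete reading of the model (our sketch, not the paper's code), the functions below compute TCA, TPL and the mixed-cache time T from the volumes and time ratios defined above; the volumes are assumed to be supplied, for example as interpolations of the curves in Figure 9, and the numbers in the usage comment are illustrative only.

def t_answers_only(Vc_Nc, Q, TR2):
    # Case (A): cached answers cost 1 time unit each, all other queries cost TR2.
    return Vc_Nc + TR2 * (Q - Vc_Nc)

def t_postings_only(Vp_Np, Q, TR1, TR2):
    # Case (B): queries computable from cached posting lists cost TR1, the rest TR2.
    return TR1 * Vp_Np + TR2 * (Q - Vp_Np)

def t_mixed(Vc_Nc, V_p_only, V_either, Q, TR1, TR2):
    # Mixed cache: answer-cache hits cost 1, queries answerable only from the
    # posting-list cache cost TR1, everything else costs TR2.
    # V_p_only stands for V(QNp - QNc) and V_either for V(QNp ∪ QNc).
    return Vc_Nc + TR1 * V_p_only + TR2 * (Q - V_either)

# Hypothetical illustration: with Q = 1e6 queries and the partial-evaluation
# ratios TR1 = 99, TR2 = 1626 estimated in Section 6.2, an answer cache covering
# 30% of the volume beats a posting-list cache covering 60% of the volume only if
# t_answers_only(0.3e6, 1e6, 1626) < t_postings_only(0.6e6, 1e6, 99, 1626).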
6.2 Parameter Estimation\nWe now use a particular implementation of a centralized system and the model of a distributed system as examples from which we estimate the parameters of the analysis from the previous section.\nWe perform the experiments using an optimized version of Terrier [11] for both indexing documents and processing queries, on a single machine with a Pentium 4 at 2GHz and 1GB of RAM.\nWe indexed the documents from the UK-2006 dataset, without removing stop words or applying stemming.\nThe posting lists in the inverted file consist of pairs of document identifier and term frequency.\nWe compress the document identifier gaps using Elias gamma encoding, and the term frequencies in documents using unary encoding [16].\nThe size of the inverted file is 1,189MB.\nA stored answer requires 1264 bytes, and an uncompressed posting takes 8 bytes.\nFrom Table 1, we obtain L = (8 · # of postings) / (1264 · # of terms) = 0.75 for the uncompressed index, and L' = (inverted file size) / (1264 · # of terms) = 0.26 for the compressed index.\nWe estimate the ratio TR = T/Tc between the average time T it takes to evaluate a query and the average time Tc it takes to return a stored answer for the same query in the following way.\nTc is measured by loading the answers for 100,000 queries in memory, and answering the queries from memory.\nThe average time is Tc = 0.069ms.
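A small sketch of this parameter estimation (ours; the posting and term counts are placeholders standing in for the Table 1 statistics, which are not repeated here):

ANSWER_BYTES = 1264      # size of one stored answer
POSTING_BYTES = 8        # one uncompressed posting (document identifier + term frequency)

def avg_list_length_in_answer_units(num_postings, num_terms, inverted_file_bytes=None):
    """L for the uncompressed index and, if the compressed index size is given, L'."""
    L = (POSTING_BYTES * num_postings) / (ANSWER_BYTES * num_terms)
    L_compressed = (inverted_file_bytes / (ANSWER_BYTES * num_terms)
                    if inverted_file_bytes is not None else None)
    return L, L_compressed

def time_ratio(avg_eval_ms, avg_cached_answer_ms=0.069):
    """TR = T / Tc: cost of evaluating a query relative to returning a cached answer."""
    return avg_eval_ms / avg_cached_answer_ms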
T is measured by processing the same 100,000 queries (the first 10,000 queries are used to warm up the system).\nFor each query, we remove stop words if at least three terms remain.\nThe stop words correspond to the terms with a frequency higher than the number of documents in the index.\nWe use a document-at-a-time approach to retrieve documents containing all query terms.\nThe only disk access required during query processing is for reading compressed posting lists from the inverted file.\nWe perform both full and partial evaluation of answers, because some queries are likely to retrieve a large number of documents, and only a fraction of the retrieved documents will be seen by users.\nIn the partial evaluation of queries, we terminate the processing after matching 10,000 documents.\nThe estimated ratios TR are presented in Table 2.\nTable 2: Ratios between the average time to evaluate a query and the average time to return a cached answer (centralized and distributed case); in each row, the first pair of values is for the uncompressed index (TR1, TR2) and the second pair for the compressed index (TR1', TR2').\nCentralized system, full evaluation: TR1 = 233, TR2 = 1760, TR1' = 707, TR2' = 1140; partial evaluation: TR1 = 99, TR2 = 1626, TR1' = 493, TR2' = 798.\nLAN system, full evaluation: TRL1 = 242, TRL2 = 1769, TRL1' = 716, TRL2' = 1149; partial evaluation: TRL1 = 108, TRL2 = 1635, TRL1' = 502, TRL2' = 807.\nWAN system, full evaluation: TRW1 = 5001, TRW2 = 6528, TRW1' = 5475, TRW2' = 5908; partial evaluation: TRW1 = 4867, TRW2 = 6394, TRW1' = 5270, TRW2' = 5575.\nFigure 10 shows, for a sample of queries, the workload of the system with partial query evaluation and compressed posting lists.\nThe x-axis corresponds to the total time the system spends processing a particular query, and the y-axis corresponds to the sum Σ_{t∈q} fq(t) · fd(t).\nNotice that the total number of postings of the query terms does not necessarily provide an accurate estimate of the workload a query imposes on the system (as it does for full evaluation with uncompressed lists).\nFigure 10: Workload for partial query evaluation with compressed posting lists.\nThe analysis of the previous section also applies to a distributed retrieval system in one or multiple sites.\nSuppose that a document-partitioned distributed system is running on a cluster of machines interconnected with a local area network (LAN) in one site.\nThe broker receives queries and broadcasts them to the query processors, which answer the queries and return the results to the broker.\nFinally, the broker merges the received answers and generates the final set of answers (we assume that the time spent on merging results is negligible).\nThe difference between the centralized architecture and the document-partitioned architecture is the extra communication between the broker and the query processors.\nUsing ICMP pings on a 100Mbps LAN, we measured that sending the query from the broker to the query processors, and receiving an answer of 4,000 bytes back at the broker, takes 0.615ms on average.\nHence, TRL = TR + 0.615ms/0.069ms = TR + 9.\nIn the case when the broker and the query processors are in different sites connected by a wide area network (WAN), we estimated that broadcasting the query from the broker to the query processors and getting back an answer of 4,000 bytes takes 329ms on average.\nHence, TRW = TR + 329ms/0.069ms = TR + 4768.\n6.3 Simulation Results\nWe now address the problem of finding the optimal trade-off between caching query answers and caching posting lists.\nTo make the problem concrete, we assume a fixed budget M on the available memory, out of which x units are used for caching query answers and M − x for caching posting lists.\nWe perform simulations and compute the average response time as a function of x.\nUsing a part of the query log as training data, we first allocate in the cache the answers to the most frequent queries that fit in space x, and then we use the rest of the memory to cache posting lists.\nFor selecting posting lists we use the QtfDf algorithm, applied to the training query log but excluding the queries that have already been cached.
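The simulation can be organized as in the following simplified sketch (ours, not the experimental code); it assumes whitespace tokenization of queries and that all answers have the same size, expressed in the same units as fd:

from collections import Counter

def simulate_split(x, M, train_queries, test_queries, fq, fd, answer_size, TR1, TR2):
    """Average response time when x units of an M-unit budget cache answers
    and M - x units cache posting lists."""
    # Answer cache: most frequent training queries that fit in x units.
    answer_cache, used = set(), 0
    for q, _ in Counter(train_queries).most_common():
        if used + answer_size > x:
            break
        answer_cache.add(q)
        used += answer_size

    # Posting-list cache: greedy QtfDf selection over the remaining budget
    # (the paper additionally excludes already-cached queries from the training log).
    term_cache, used = set(), 0
    for t in sorted(fq, key=lambda t: fq[t] / fd[t], reverse=True):
        if used + fd[t] <= M - x:
            term_cache.add(t)
            used += fd[t]

    total = 0.0
    for q in test_queries:
        if q in answer_cache:
            total += 1                                  # cached answer: 1 time unit
        elif all(t in term_cache for t in q.split()):
            total += TR1                                # all posting lists cached
        else:
            total += TR2                                # at least one list must be fetched
    return total / len(test_queries)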
In Figure 11, we plot the simulated response time for a centralized system as a function of x.\nFor the uncompressed index we use M = 1GB, and for the compressed index we use M = 0.5GB.\nIn the configuration that uses partial query evaluation with compressed posting lists, the lowest response time is achieved when 0.15GB of the 0.5GB is allocated to storing answers for queries.\nWe obtained similar trends in the results for the LAN setting.\nFigure 11: Optimal division of the cache in a server.\nFigure 12 shows the simulated workload for a distributed system across a WAN.\nFigure 12: Optimal division of the cache when the next level requires WAN access.\nIn this case, the total amount of memory is split between the broker, which holds the cached answers of queries, and the query processors, which hold the cache of posting lists.\nAccording to the figure, the difference between the configurations of the query processors is less important, because the network communication overhead increases the response time substantially.\nWhen using uncompressed posting lists, the optimal allocation of memory corresponds to using approximately 70% of the memory for caching query answers.\nThis is explained by the fact that there is no need for network communication when the query can be answered by the cache at the broker.\n7. EFFECT OF THE QUERY DYNAMICS\nFor our query log, the query distribution and the query-term distribution change slowly over time.\nTo support this claim, we first assess how topics change by comparing the distribution of queries from the first week of June 2006 to the distribution of queries for the remainder of 2006 that did not appear in that first week.\nWe found that a very small percentage of queries are new queries.\nThe majority of queries that appear in a given week repeat in the following weeks for the next six months.\nWe then compute the hit rate of a static cache of 128,000 answers trained over a period of two weeks (Figure 13).\nWe report the hit rate hourly for 7 days, starting from 5pm.\nWe observe that the hit rate reaches its highest value during the night (around midnight), whereas it reaches its minimum around 2-3pm.\nAfter a small initial decay, the hit rate stabilizes between 0.28 and 0.34 for the entire week, suggesting that the static cache remains effective for a whole week after the training period.\nFigure 13: Hourly hit rate for a static cache holding 128,000 answers during the period of a week.
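The measurement behind Figure 13 can be sketched as follows (our illustration, assuming a log of (hour index, normalized query) pairs): train a static answer cache on an initial period and report its hit rate for each hour of the later stream.

from collections import Counter, defaultdict

def hourly_hit_rate(train, test, cache_entries=128000):
    """train, test: iterables of (hour_index, normalized_query) pairs.
    Returns {hour_index: hit rate} for a static cache of the most frequent
    training queries."""
    top = Counter(q for _, q in train).most_common(cache_entries)
    cache = {q for q, _ in top}

    hits, totals = defaultdict(int), defaultdict(int)
    for hour, q in test:
        totals[hour] += 1
        if q in cache:
            hits[hour] += 1
    return {h: hits[h] / totals[h] for h in totals}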
The static cache of posting lists can be periodically recomputed.\nTo estimate the time interval after which the posting lists in the static cache need to be recomputed, we must consider an efficiency/quality trade-off: using too short an interval might be prohibitively expensive, while recomputing the cache too infrequently might leave an obsolete cache that no longer matches the statistical characteristics of the current query stream.\nWe measured the effect on the QtfDf algorithm of the changes in a 15-week query stream (Figure 14).\nWe compute the query-term frequencies over the whole stream, select which terms to cache, and then compute the hit rate on the whole query stream.\nThis hit rate serves as an upper bound, as it assumes perfect knowledge of the query-term frequencies.\nTo simulate a realistic scenario, we use the first 6 (respectively 3) weeks of the query stream for computing query-term frequencies and the following 9 (respectively 12) weeks to estimate the hit rate.\nAs Figure 14 shows, the hit rate decreases by less than 2%.\nThe high correlation among the query-term frequencies during different time periods explains the graceful adaptation of the static caching algorithms to the future query stream.\nIndeed, the pairwise correlation among all possible 3-week periods of the 15-week query stream is over 99.5%.\nFigure 14: Impact of distribution changes on the static caching of posting lists.\n8. CONCLUSIONS\nCaching is an effective technique in search engines for improving response time, reducing the load on query processors, and improving network bandwidth utilization.\nWe present results on both dynamic and static caching.\nDynamic caching of queries has limited effectiveness due to the high number of compulsory misses caused by the number of unique or infrequent queries.\nOur results show that, in our UK log, the minimum miss rate is 50% using a working set strategy.\nCaching terms is more effective with respect to miss rate, achieving values as low as 12%.\nWe also propose a new algorithm for static caching of posting lists that outperforms previous static caching algorithms as well as dynamic algorithms such as LRU and LFU, obtaining hit rate values that are over 10% higher compared to these strategies.\nWe present a framework for the analysis of the trade-off between caching query results and caching posting lists, and we simulate different types of architectures.\nOur results show that for centralized and LAN environments there is an optimal allocation between caching query results and caching posting lists, while for WAN scenarios in which network time prevails it is more important to cache query results.\n9. REFERENCES\n[1] V. N. Anh and A. Moffat. Pruned query evaluation using pre-computed impacts. In ACM CIKM, 2006.\n[2] R. A. Baeza-Yates and F. Saint-Jean. A three level search engine index based in query log distribution. In SPIRE, 2003.\n[3] C. Buckley and A. F. Lewit. Optimization of inverted vector searches. In ACM SIGIR, 1985.\n[4] S. Büttcher and C. L. A. Clarke. A document-centric approach to static index pruning in text retrieval systems. In ACM CIKM, 2006.\n[5] P. Cao and S. Irani. Cost-aware WWW proxy caching algorithms. In USITS, 1997.\n[6] P. Denning. Working sets past and present. IEEE Trans. on Software Engineering, SE-6(1):64-84, 1980.
[7] T. Fagni, R. Perego, F. Silvestri, and S. Orlando. Boosting the performance of web search engines: Caching and prefetching query results by exploiting historical usage data. ACM Trans. Inf. Syst., 24(1):51-78, 2006.\n[8] R. Lempel and S. Moran. Predictive caching and prefetching of query results in search engines. In WWW, 2003.\n[9] X. Long and T. Suel. Three-level caching for efficient query processing in large web search engines. In WWW, 2005.\n[10] E. P. Markatos. On caching search engine query results. Computer Communications, 24(2):137-143, 2001.\n[11] I. Ounis, G. Amati, V. Plachouras, B. He, C. Macdonald, and C. Lioma. Terrier: A High Performance and Scalable Information Retrieval Platform. In SIGIR Workshop on Open Source Information Retrieval, 2006.\n[12] V. V. Raghavan and H. Sever. On the reuse of past optimal queries. In ACM SIGIR, 1995.\n[13] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and B. Ribeiro-Neto. Rank-preserving two-level caching for scalable search engines. In ACM SIGIR, 2001.\n[14] D. R. Slutz and I. L. Traiger. A note on the calculation of average working set size. Communications of the ACM, 17(10):563-565, 1974.\n[15] T. Strohman, H. Turtle, and W. B. Croft. Optimization strategies for complex queries. In ACM SIGIR, 2005.\n[16] I. H. Witten, T. C. Bell, and A. Moffat. Managing Gigabytes: Compressing and Indexing Documents and Images. John Wiley & Sons, Inc., NY, 1994.\n[17] N. E. Young. On-line file caching. Algorithmica, 33(3):371-383, 2002.
the space required for each unit stored in the cache memory , or the amount of time it takes to put together a response to a user query .\nWe analyze these issues more carefully later in this paper .\nIt is interesting also to observe the histogram of Figure 7 , which is an intermediate step in the computation of the miss rate graph .\nIt reports the distribution of distances between repetitions of the same frequent query .\nThe distance in the plot is measured in the number of distinct queries separating a query and its repetition , and it considers only queries appearing at least 10 times .\nFrom Figures 6 and 7 , we conclude that even if we set the size of the query answers cache to a relatively large number of entries , the miss rate is high .\nThus , caching the posting lists of terms has the potential to improve the hit ratio .\nThis is what we explore next .\n5 .\nCACHING POSTING LISTS\nThe previous section shows that caching posting lists can obtain a higher hit rate compared to caching query answers .\nIn this section we study the problem of how to select post\ning lists to place on a certain amount of available memory , assuming that the whole index is larger than the amount of memory available .\nThe posting lists have variable size ( in fact , their size distribution follows a power law ) , so it is beneficial for a caching policy to consider the sizes of the posting lists .\nWe consider both dynamic and static caching .\nFor dynamic caching , we use two well-known policies , LRU and LFU , as well as a modified algorithm that takes posting-list size into account .\nBefore discussing the static caching strategies , we introduce some notation .\nWe use fq ( t ) to denote the query-term frequency of a term t , that is , the number of queries containing t in the query log , and fd ( t ) to denote the document frequency of t , that is , the number of documents in the collection in which the term t appears .\nThe first strategy we consider is the algorithm proposed by Baeza-Yates and Saint-Jean [ 2 ] , which consists in selecting the posting lists of the terms with the highest query-term frequencies fq ( t ) .\nWe call this algorithm QTF .\nWe observe that there is a trade-off between fq ( t ) and fd ( t ) .\nTerms with high fq ( t ) are useful to keep in the cache because they are queried often .\nOn the other hand , terms with high fd ( t ) are not good candidates because they correspond to long posting lists and consume a substantial amount of space .\nIn fact , the problem of selecting the best posting lists for the static cache corresponds to the standard KNAPSACK problem : given a knapsack of fixed capacity , and a set of n items , such as the i-th item has value ci and size si , select the set of items that fit in the knapsack and maximize the overall value .\nIn our case , `` value '' corresponds to fq ( t ) and `` size '' corresponds to fd ( t ) .\nThus , we employ a simple algorithm for the knapsack problem , which is selecting the posting lists of the terms with the highest values of the ratio fq ( t ) fd ( t ) .\nWe call this algorithm QTFDF .\nWe tried other variations considering query frequencies instead of term frequencies , but the gain was minimal compared to the complexity added .\nIn addition to the above two static algorithms we consider the following algorithms for dynamic caching :\n\u2022 LRU : A standard LRU algorithm , but many posting lists might need to be evicted ( in order of least-recent usage ) until there is enough space in the memory to place the 
currently accessed posting list ; \u2022 LFU : A standard LFU algorithm ( eviction of the leastfrequently used ) , with the same modification as the LRU ; \u2022 DYN-QTFDF : A dynamic version of the QTFDF algorithm ; evict from the cache the term ( s ) with the lowest\nThe performance of all the above algorithms for 15 weeks of the query log and the UK dataset are shown in Figure 8 .\nPerformance is measured with hit rate .\nThe cache size is measured as a fraction of the total space required to store the posting lists of all terms .\nFor the dynamic algorithms , we load the cache with terms in order of fq ( t ) and we let the cache `` warm up '' for 1 million queries .\nFor the static algorithms , we assume complete knowledge of the frequencies fq ( t ) , that is , we estimate fq ( t ) from the whole query stream .\nAs we show in Section 7 the results do not change much if we compute the query-term frequencies using the first 3 or 4 weeks of the query log and measure the hit rate on the rest .\nFigure 8 : Hit rate of different strategies for caching posting lists .\nThe most important observation from our experiments is that the static QTFDF algorithm has a better hit rate than all the dynamic algorithms .\nAn important benefit a static cache is that it requires no eviction and it is hence more efficient when evaluating queries .\nHowever , if the characteristics of the query traffic change frequently over time , then it requires re-populating the cache often or there will be a significant impact on hit rate .\n6 .\nANALYSIS OF STATIC CACHING\nIn this section we provide a detailed analysis for the problem of deciding whether it is preferable to cache query answers or cache posting lists .\nOur analysis takes into account the impact of caching between two levels of the data-access hierarchy .\nIt can either be applied at the memory/disk layer or at a server/remote server layer as in the architecture we discussed in the introduction .\nUsing a particular system model , we obtain estimates for the parameters required by our analysis , which we subsequently use to decide the optimal trade-off between caching query answers and caching posting lists .\n6.1 Analytical Model\nLet M be the size of the cache measured in answer units ( the cache can store M query answers ) .\nAssume that all posting lists are of the same length L , measured in answer units .\nWe consider the following two cases : ( A ) a cache that stores only precomputed answers , and ( B ) a cache that stores only posting lists .\nIn the first case , Nc = M answers fit in the cache , while in the second case Np = M/L posting lists fit in the cache .\nThus , Np = Nc/L .\nNote that although posting lists require more space , we can combine terms to evaluate more queries ( or partial queries ) .\nFor case ( A ) , suppose that a query answer in the cache can be evaluated in 1 time unit .\nFor case ( B ) , assume that if the posting lists of the terms of a query are in the cache then the results can be computed in TR1 time units , while if the posting lists are not in the cache then the results can be computed in TR2 time units .\nOf course TR2 > TR1 .\nNow we want to compare the time to answer a stream of Q queries in both cases .\nLet Vc ( Nc ) be the volume of the most frequent Nc queries .\nThen , for case ( A ) , we have an overall time\nSimilarly , for case ( B ) , let Vp ( Np ) be the number of com\nputable queries .\nThen we have overall time\nWe want to check under which conditions we have TPL < TCA .\nWe have\nFigure 9 shows 
the values of Vp and Vc for our data.\nWe can see that caching answers saturates faster, and for this particular data there is no additional benefit from using more than 10% of the index space for caching answers.\nAs the query distribution is a power law with parameter α > 1, the i-th most frequent query appears with probability proportional to 1/i^α.\nTherefore the volume Vc(n), the total number of occurrences of the n most frequent queries, grows as the partial sum of this power law.\nFigure 9: Cache saturation as a function of size.\nWe know that Vp(n) grows faster than Vc(n) and assume, based on experimental results, that the relation is of the form Vp(n) = k · Vc(n)^β.\nIn the worst case, for a large cache, β → 1.\nThat is, both techniques will cache a constant fraction of the overall query volume.\nCaching posting lists then makes sense only if the resulting overall time TPL is smaller than TCA.\nIf we use compression, we have L′ < L and TR′1 > TR1.\nAccording to the experiments that we show later, compression is always better.\nFor a small cache, we are interested in the transient behavior, and then β > 1, as computed from our data.\nIn this case there will always be a point where TPL > TCA for a large number of queries.\nIn reality, instead of filling the cache only with answers or only with posting lists, a better strategy is to divide the total cache space into a cache for answers and a cache for posting lists.\nIn such a case, there will be some queries that could be answered by both parts of the cache.\nAs the answer cache is faster, it will be the first choice for answering those queries.\nLet Q_Nc and Q_Np be the sets of queries that can be answered by the cached answers and the cached posting lists, respectively; the overall time is then the sum of the times spent on Q_Nc, on Q_Np, and on the remaining queries, where Np = (M − Nc)/L.\nFinding the optimal division of the cache in order to minimize the overall retrieval time is a difficult problem to solve analytically.\nIn Section 6.3 we use simulations to derive optimal cache trade-offs for particular implementation examples.\n6.2 Parameter Estimation\nWe now use a particular implementation of a centralized system and the model of a distributed system as examples from which we estimate the parameters of the analysis from the previous section.\nWe perform the experiments using an optimized version of Terrier [11] for both indexing documents and processing queries, on a single machine with a Pentium 4 at 2GHz and 1GB of RAM.\nWe indexed the documents from the UK-2006 dataset, without removing stop words or applying stemming.\nThe posting lists in the inverted file consist of pairs of document identifier and term frequency.\nWe compress the document identifier gaps using Elias gamma encoding, and the term frequencies in documents using unary encoding [16].\nThe size of the inverted file is 1,189 Mb.\nA stored answer requires 1264 bytes, and an uncompressed posting takes 8 bytes.\nFrom Table 1, we obtain L = (8 · # of postings) / (1264 · # of terms) = 0.75 and L′ = (inverted file size) / (1264 · # of terms) = 0.26.\nWe estimate the ratio TR = T/Tc between the average time T it takes to evaluate a query and the average time Tc it takes to return a stored answer for the same query, in the following way.\nTc is measured by loading the answers for 100,000 queries in memory, and answering the queries from memory.\nThe average time is Tc = 0.069 ms.
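The comparison of Section 6.1 can be made concrete with a short sketch. The code below is not from the original paper: the coverage curves Vc and Vp, the cost constants TR1 and TR2, and the assumption that an answer-cache miss costs TR2 time units (a full evaluation) are illustrative stand-ins; only the relations Nc = M and Np = M/L come from the text.

```python
# Minimal sketch (not the authors' code) of the case (A) / case (B) comparison
# in Section 6.1.  Vc(n) and Vp(n) stand in for the empirical coverage curves
# of Figure 9; treating an answer-cache miss as a full evaluation costing TR2
# time units is our reading of the model, not an explicit statement in the text.

def time_answer_cache(M, Q, Vc, TR2):
    # Case (A): Nc = M answers fit in the cache; a hit costs 1 time unit.
    Nc = M
    hits = Vc(Nc)
    return hits * 1.0 + (Q - hits) * TR2

def time_postings_cache(M, L, Q, Vp, TR1, TR2):
    # Case (B): Np = M / L posting lists fit in the cache; queries computable
    # from cached lists cost TR1, the rest cost TR2.
    Np = M / L
    covered = Vp(Np)
    return covered * TR1 + (Q - covered) * TR2

if __name__ == "__main__":
    Q = 1_000_000                  # queries in the stream
    M = 100_000                    # cache size in answer units
    L = 0.75                       # average posting-list length in answer units (Section 6.2)
    TR1, TR2 = 2.0, 10.0           # illustrative evaluation costs, not measured values

    # Saturating toy coverage curves standing in for Figure 9.
    Vc = lambda n: Q * min(0.5, 0.5 * (n / 200_000.0) ** 0.6)
    Vp = lambda n: Q * min(0.8, 0.8 * (n / 200_000.0) ** 0.6)

    print("T_CA:", time_answer_cache(M, Q, Vc, TR2))
    print("T_PL:", time_postings_cache(M, L, Q, Vp, TR1, TR2))
```

With measured values of L, L′ and TR substituted for the toy constants, the same two functions reproduce the qualitative trade-off discussed above: the answer cache wins while its coverage keeps growing, and the posting-list cache wins once the answer cache saturates.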
T is measured by processing the same 100,000 queries ( the first 10,000 queries are used to warm-up the system ) .\nFor each query , we remove stop words , if there are at least three remaining terms .\nThe stop words correspond to the terms with a frequency higher than the number of documents in the index .\nWe use a document-at-a-time approach to retrieve documents containing all query terms .\nThe only disk access required during query processing is for reading compressed posting lists from the inverted file .\nWe perform both full and partial evaluation of answers , because some queries are likely to retrieve a large number of documents , and only a fraction of the retrieved documents will be seen by users .\nIn the partial evaluation of queries , we terminate the processing after matching 10,000 documents .\nThe estimated ratios TR are presented in Table 2 .\nFigure 10 shows for a sample of queries the workload of the system with partial query evaluation and compressed posting lists .\nThe x-axis corresponds to the total time the tical axis corresponds to the sum E system spends processing a particular query , and the vert \u2208 q fq \u2022 fd ( t ) .\nNotice that the total number of postings of the query-terms does not necessarily provide an accurate estimate of the workload imposed on the system by a query ( which is the case for full evaluation and uncompressed lists ) .\nFigure 10 : Workload for partial query evaluation with compressed posting lists .\nThe analysis of the previous section also applies to a distributed retrieval system in one or multiple sites .\nSuppose that a document partitioned distributed system is running on a cluster of machines interconnected with a local area network ( LAN ) in one site .\nThe broker receives queries and broadcasts them to the query processors , which answer the queries and return the results to the broker .\nFinally , the broker merges the received answers and generates the final set of answers ( we assume that the time spent on merging results is negligible ) .\nThe difference between the centralized architecture and the document partition architecture is the extra communication between the broker and the query processors .\nUsing ICMP pings on a 100Mbps LAN , we have measured that sending the query from the broker to the query processors which send an answer of 4,000 bytes back to the broker takes on average 0.615 ms. Hence , TRL = TR + 0.615 ms/0 .069 ms = TR + 9 .\nIn the case when the broker and the query processors are in different sites connected with a wide area network ( WAN ) , we estimated that broadcasting the query from the broker to the query processors and getting back an answer of 4,000 bytes takes on average 329ms .\nHence , TRW = TR + 329ms/0 .069 ms = TR + 4768 .\n6.3 Simulation Results\nWe now address the problem of finding the optimal tradeoff between caching query answers and caching posting lists .\nTo make the problem concrete we assume a fixed budget M on the available memory , out of which x units are used for caching query answers and M \u2212 x for caching posting lists .\nWe perform simulations and compute the average response time as a function of x. 
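As a rough illustration of this simulation, the sketch below (our own, under stated assumptions) splits a memory budget M into x units of cached answers and M − x units of posting lists selected by the QTFDF ratio fq(t)/fd(t), and then replays a test log to estimate average response time. The toy logs, the per-list size factor list_unit, and the cost constants t_hit, tr1 and tr2 are hypothetical; only the overall procedure follows the description in the text.

```python
# Hedged sketch of the split-cache simulation: x answer units hold precomputed
# answers, the remaining M - x units hold posting lists chosen greedily by the
# QTFDF ratio fq(t)/fd(t).  All concrete values below are illustrative.
from collections import Counter

def build_caches(train_queries, fd, M, x, list_unit=0.01):
    """Fill x units with answers and M - x units with posting lists (QTFDF)."""
    q_freq = Counter(train_queries)
    answers, used = set(), 0.0
    for q, _ in q_freq.most_common():
        if used + 1.0 > x:            # one stored answer is assumed to take 1 unit
            break
        answers.add(q)
        used += 1.0
    # Query-term frequencies, excluding queries already served by the answer cache.
    fq = Counter(t for q in train_queries if q not in answers for t in q.split())
    cached_terms, budget = set(), M - x
    for t in sorted(fq, key=lambda t: fq[t] / fd[t], reverse=True):
        size = fd[t] * list_unit      # assumed posting-list size in answer units
        if size <= budget:
            cached_terms.add(t)
            budget -= size
    return answers, cached_terms

def avg_response_time(test_queries, answers, cached_terms,
                      t_hit=1.0, tr1=2.0, tr2=10.0):
    """Replay a test log: answer-cache hit, posting-list hit, or full evaluation."""
    cost = 0.0
    for q in test_queries:
        if q in answers:
            cost += t_hit
        elif all(t in cached_terms for t in q.split()):
            cost += tr1
        else:
            cost += tr2
    return cost / len(test_queries)

if __name__ == "__main__":
    train = ["uk news", "uk news", "weather london", "cheap flights", "uk map"]
    test = ["uk news", "uk weather", "cheap flights", "london map"]
    fd = Counter({"uk": 500, "news": 300, "weather": 200, "london": 400,
                  "cheap": 100, "flights": 150, "map": 250})
    for x in (0, 1, 2, 3):            # sweep the split of the budget M = 4
        a, c = build_caches(train, fd, M=4, x=x)
        print(x, avg_response_time(test, a, c))
```

Sweeping x over the available budget produces the kind of response-time curve shown later in Figure 11, with the best split depending on how quickly the answer cache saturates and on the relative costs of the two miss types.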
Using a part of the query log as training data , we first allocate in the cache the answers to the most frequent queries that fit in space x , and then we use the rest of the memory to cache posting lists .\nFor selecting posting lists we use the QTFDF algorithm , applied to the training query log but excluding the queries that have already been cached .\nIn Figure 11 , we plot the simulated response time for a centralized system as a function of x. For the uncompressed index we use M = 1GB , and for the compressed index we use M = 0.5 GB .\nIn the case of the configuration that uses partial query evaluation with compressed posting lists , the lowest response time is achieved when 0.15 GB out of the 0.5 GB is allocated for storing answers for queries .\nWe obtained similar trends in the results for the LAN setting .\nFigure 12 shows the simulated workload for a distributed system across a WAN .\nIn this case , the total amount of memory is split between the broker , which holds the cached\nFigure 11 : Optimal division of the cache in a server .\nFigure 12 : Optimal division of the cache when the next level requires WAN access .\nanswers of queries , and the query processors , which hold the cache of posting lists .\nAccording to the figure , the difference between the configurations of the query processors is less important because the network communication overhead increases the response time substantially .\nWhen using uncompressed posting lists , the optimal allocation of memory corresponds to using approximately 70 % of the memory for caching query answers .\nThis is explained by the fact that there is no need for network communication when the query can be answered by the cache at the broker .\n7 .\nEFFECT OF THE QUERY DYNAMICS\nFor our query log , the query distribution and query-term distribution change slowly over time .\nTo support this claim , we first assess how topics change comparing the distribution of queries from the first week in June , 2006 , to the distribution of queries for the remainder of 2006 that did not appear in the first week in June .\nWe found that a very small percentage of queries are new queries .\nThe majority of queries that appear in a given week repeat in the following weeks for the next six months .\nWe then compute the hit rate of a static cache of 128 , 000 answers trained over a period of two weeks ( Figure 13 ) .\nWe report hit rate hourly for 7 days , starting from 5pm .\nWe observe that the hit rate reaches its highest value during the night ( around midnight ) , whereas around 2-3pm it reaches its minimum .\nAfter a small decay in hit rate values , the hit rate stabilizes between 0.28 , and 0.34 for the entire week , suggesting that the static cache is effective for a whole week after the training period .\nFigure 13 : Hourly hit rate for a static cache holding 128,000 answers during the period of a week .\nThe static cache of posting lists can be periodically recomputed .\nTo estimate the time interval in which we need to recompute the posting lists on the static cache we need to consider an efficiency/quality trade-off : using too short a time interval might be prohibitively expensive , while recomputing the cache too infrequently might lead to having an obsolete cache not corresponding to the statistical characteristics of the current query stream .\nWe measured the effect on the QTFDF algorithm of the changes in a 15-week query stream ( Figure 14 ) .\nWe compute the query term frequencies over the whole stream , select which terms to cache , 
and then compute the hit rate on the whole query stream .\nThis hit rate is as an upper bound , and it assumes perfect knowledge of the query term frequencies .\nTo simulate a realistic scenario , we use the first 6 ( 3 ) weeks of the query stream for computing query term frequencies and the following 9 ( 12 ) weeks to estimate the hit rate .\nAs Figure 14 shows , the hit rate decreases by less than 2 % .\nThe high correlation among the query term frequencies during different time periods explains the graceful adaptation of the static caching algorithms to the future query stream .\nIndeed , the pairwise correlation among all possible 3-week periods of the 15-week query stream is over 99.5 % .\n8 .\nCONCLUSIONS\nCaching is an effective technique in search engines for improving response time , reducing the load on query processors , and improving network bandwidth utilization .\nWe present results on both dynamic and static caching .\nDynamic caching of queries has limited effectiveness due to the high number of compulsory misses caused by the number of unique or infrequent queries .\nOur results show that in our UK log , the minimum miss rate is 50 % using a working set strategy .\nCaching terms is more effective with respect to miss rate , achieving values as low as 12 % .\nWe also propose a new algorithm for static caching of posting lists that outperforms previous static caching algorithms as well as dynamic algorithms such as LRU and LFU , obtaining hit rate values that are over 10 % higher compared these strategies .\nWe present a framework for the analysis of the trade-off between caching query results and caching posting lists , and we simulate different types of architectures .\nOur results show that for centralized and LAN environments , there is an optimal allocation of caching query results and caching of posting lists , while for WAN scenarios in which network time prevails it is more important to cache query results .\nFigure 14 : Impact of distribution changes on the static caching of posting lists ."} {"id": "H-12", "title": "", "abstract": "", "keyphrases": ["search engin", "snippet gener", "document cach", "link graph measur", "perform", "web summari", "special-purpos filesystem", "ram", "document compact", "text fragment", "precomput final result page", "vbyte code scheme", "semi-static compress", "document cach"], "prmu": [], "lvl-1": "Fast Generation of Result Snippets in Web Search Andrew Turpin & Yohannes Tsegay RMIT University Melbourne, Australia aht@cs.rmit.edu.au ytsegay@cs.rmit.edu.au David Hawking CSIRO ICT Centre Canberra, Australia david.hawking@acm.org Hugh E. 
Williams Microsoft Corporation One Microsoft Way Redmond, WA.\nhughw@microsoft.com ABSTRACT The presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users.\nIn this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets.\nWe begin by proposing and analysing a document compression method that reduces snippet generation time by 58% over a baseline using the zlib compression library.\nThese experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets, and so caching documents in RAM is essential for a fast snippet generation process.\nUsing simulation, we examine snippet generation performance for different size RAM caches.\nFinally we propose and analyse document reordering and compaction, revealing a scheme that increases the number of document cache hits with only a marginal affect on snippet quality.\nThis scheme effectively doubles the number of documents that can fit in a fixed size cache.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; H.3.4 [Information Storage and Retrieval]: Systems and Software-performance evaluation (efficiency and effectiveness); General Terms Algorithms, Experimentation, Measurement, Performance 1.\nINTRODUCTION Each result in search results list delivered by current WWW search engines such as search.yahoo.com, google.com and search.msn.com typically contains the title and URL of the actual document, links to live and cached versions of the document and sometimes an indication of file size and type.\nIn addition, one or more snippets are usually presented, giving the searcher a sneak preview of the document contents.\nSnippets are short fragments of text extracted from the document content (or its metadata).\nThey may be static (for example, always show the first 50 words of the document, or the content of its description metadata, or a description taken from a directory site such as dmoz.org) or query-biased [20].\nA query-biased snippet is one selectively extracted on the basis of its relation to the searcher``s query.\nThe addition of informative snippets to search results may substantially increase their value to searchers.\nAccurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored.\nIn the best case, snippets may obviate the need to open any documents by directly providing the answer to the searcher``s real information need, such as the contact details of a person or an organization.\nGeneration of query-biased snippets by Web search engines indexing of the order of ten billion web pages and handling hundreds of millions of search queries per day imposes a very significant computational load (remembering that each search typically generates ten snippets).\nThe simpleminded approach of keeping a copy of each document in a file and generating snippets by opening and scanning files, works when query rates are low and collections are small, but does not scale to the degree required.\nThe overhead of opening and reading ten files per query on top of accessing the index structure to locate them, would be manifestly excessive under heavy query load.\nEven storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems.\nSpecial-purpose filesystems 
have been built to address these problems [6].\nNote that the utility of snippets is by no means restricted to whole-of-Web search applications.\nEfficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov (c. 25 million pages) and govsearch.australia.gov.au (c. 5 million pages) and within large enterprises such as IBM [2] (c. 50 million pages).\nSnippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present.\nWe present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets and compare its space/time performance against an obvious baseline based on the zlib compressor on various data sets.\nWe report the proportion of time spent for disk seeks, disk reads and cpu processing; demonstrating that the time for locating each document (seek time) dominates, as expected.\nAs the time to process a document in RAM is small in comparison to locating and reading the document into memory, it may seem that compression is not required.\nHowever, this is only true if there is no caching of documents in RAM.\nControlling the RAM of physical systems for experimentation is difficult, hence we use simulation to show that caching documents dramatically improves the performance of snippet generation.\nIn turn, the more documents can be compressed, the more can fit in cache, and hence the more disk seeks can be avoided: the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [24].\nAs hitting the document cache is important, we examine document compaction, as opposed to compression, schemes by imposing an a priori ordering of sentences within a document, and then only allowing leading sentences into cache for each document.\nThis leads to further time savings, with only marginal impact on the quality of the snippets returned.\n2.\nRELATED WORK Snippet generation is a special type of extractive document summarization, in which sentences, or sentence fragments, are selected for inclusion in the summary on the basis of the degree to which they match the search query.\nThis process was given the name of query-biased summarization by Tombros and Sanderson [20] The reader is referred to Mani [13] and to Radev et al. 
[16] for overviews of the very many different applications of summarization and for the equally diverse methods for producing summaries.\nEarly Web search engines presented query-independent snippets consisting of the first k bytes of the result document.\nGenerating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries, as there is no need to search the document for text fragments containing query terms.\nTo our knowledge, Google was the first whole-ofWeb search engine to provide query biased summaries, but summarization is listed by Brin and Page [1] only under the heading of future work.\nMost of the experimental work using query-biased summarization has focused on comparing their value to searchers relative to other types of summary [20, 21], rather than efficient generation of summaries.\nDespite the importance of efficient summary generation in Web search, few algorithms appear in the literature.\nSilber and McKoy [19] describe a linear-time lexical chaining algorithm for use in generic summaries, but offer no empirical data for the performance of their algorithm.\nWhite et al [21] report some experimental timings of their WebDocSum system, but the snippet generation algorithms themselves are not isolated, so it is difficult to infer snippet generation time comparable to the times we report in this paper.\n3.\nSEARCH ENGINE ARCHITECTURES A search engine must perform a variety of activities, and is comprised of many sub-systems, as depicted by the boxes in Figure 1.\nNote that there may be several other sub-systems such as the Advertising Engine or the Parsing Engine that could easily be added to the diagram, but we have concentrated on the sub-systems that are relevant to snippet generation.\nDepending on the number of documents that the search engine indexes, the data and processes for each Ranking Engine Crawling Engine Indexing Engine Engine Lexicon Meta Data Engine Engine Snippet Term&Doc#s Snippetperdoc WEB Query Engine Query Results Page Term#s Doc#s Invertedlists Docs perdoc Title,URL,etc Doc#s Document meta data Terms Querystring Term#s Figure 1: An abstraction of some of the sub-systems in a search engine.\nDepending on the number of documents indexed, each sub-system could reside on a single machine, be distributed across thousands of machines, or a combination of both.\nsub-system could be distributed over many machines, or all occupy a single server and filesystem, competing with each other for resources.\nSimilarly, it may be more efficient to combine some sub-systems in an implementation of the diagram.\nFor example, the meta-data such as document title and URL requires minimal computation apart from highlighting query words, but we note that disk seeking is likely to be minimized if title, URL and fixed summary information is stored contiguously with the text from which query biased summaries are extracted.\nHere we ignore the fixed text and consider only the generation of query biased summaries: we concentrate on the Snippet Engine.\nIn addition to data and programs operating on that data, each sub-system also has its own memory management scheme.\nThe memory management system may simply be the memory hierarchy provided by the operating system used on machines in the sub-system, or it may be explicitly coded to optimise the processes in the sub-system.\nThere are many papers on caching in search engines (see [3] and references therein for a current summary), but it seems reasonable that there 
is a query cache in the Query Engine that stores precomputed final result pages for very popular queries.\nWhen one of the popular queries is issued, the result page is fetched straight from the query cache.\nIf the issued query is not in the query cache, then the Query Engine uses the four sub-systems in turn to assemble a results page.\n1.\nThe Lexicon Engine maps query terms to integers.\n2.\nThe Ranking Engine retrieves inverted lists for each term, using them to get a ranked list of documents.\n3.\nThe Snippet Engine uses those document numbers and query term numbers to generate snippets.\n4.\nThe Meta Data Engine fetches other information about each document to construct the results page.\nIN A document broken into one sentence per line, and a sequence of query terms.\n1 For each line of the text, L = [w1, w2, ... , wm] 2 Let h be 1 if L is a heading, 0 otherwise.\n3 Let be 2 if L is the first line of a document, 1 if it is the second line, 0 otherwise.\n4 Let c be the number of wi that are query terms, counting repetitions.\n5 Let d be the number of distinct query terms that match some wi.\n6 Identify the longest contiguous run of query terms in L, say wj ... wj+k. 7 Use a weighted combination of c, d, k, h and to derive a score s. 8 Insert L into a max-heap using s as the key.\nOUT Remove the number of sentences required from the heap to form the summary.\nFigure 2: Simple sentence ranker that operates on raw text with one sentence per line.\n4.\nTHE SNIPPET ENGINE For each document identifier passed to the Snippet Engine, the engine must generate text, preferably containing query terms, that attempts to summarize that document.\nPrevious work on summarization identifies the sentence as the minimal unit for extraction and presentation to the user [12].\nAccordingly, we also assume a web snippet extraction process will extract sentences from documents.\nIn order to construct a snippet, all sentences in a document should be ranked against the query, and the top two or three returned as the snippet.\nThe scoring of sentences against queries has been explored in several papers [7, 12, 18, 20, 21], with different features of sentences deemed important.\nBased on these observations, Figure 2, shows the general algorithm for scoring sentences in relevant documents, with the highest scoring sentences making the snippet for each document.\nThe final score of a sentence, assigned in Step 7, can be derived in many different ways.\nIn order to avoid bias towards any particular scoring mechanism, we compare sentence quality later in the paper using the individual components of the score, rather than an arbitrary combination of the components.\n4.1 Parsing Web Documents Unlike well edited text collections that are often the target for summarization systems, Web data is often poorly structured, poorly punctuated, and contains a lot of data that do not form part of valid sentences that would be candidates for parts of snippets.\nWe assume that the documents passed to the Snippet Engine by the Indexing Engine have all HTML tags and JavaScript removed, and that each document is reduced to a series of word tokens separated by non-word tokens.\nWe define a word token as a sequence of alphanumeric characters, while a non-word is a sequence of non-alphanumeric characters such as whitespace and the other punctuation symbols.\nBoth are limited to a maximum of 50 characters.\nAdjacent, repeating characters are removed from the punctuation.\nIncluded in the punctuation set is a special end of sentence 
marker which replaces the usual three sentence terminators ?!\n.\n.\nOften these explicit punctuation characters are missing, and so HTML tags such as
paragraph and line-break tags
are assumed to terminate sentences.\nIn addition, a sentence must contain at least five words and no more than twenty words, with longer or shorter sentences being broken and joined as required to meet these criteria [10].\nUnterminated HTML tags-that is, tags with an open brace, but no close brace-cause all text from the open brace to the next open brace to be discarded.\nA major problem in summarizing web pages is the presence of large amounts of promotional and navigational material (navbars) visually above and to the left of the actual page content.\nFor example, The most wonderful company on earth.\nProducts.\nService.\nAbout us.\nContact us.\nTry before you buy.\nSimilar, but often not identical, navigational material is typically presented on every page within a site.\nThis material tends to lower the quality of summaries and slow down summary generation.\nIn our experiments we did not use any particular heuristics for removing navigational information as the test collection in use (wt100g) pre-dates the widespread take up of the current style of web publishing.\nIn wt100g, the average web page size is more than half the current Web average [11].\nAnecdotally, the increase is due to inclusion of sophisticated navigational and interface elements and the JavaScript functions to support them.\nHaving defined the format of documents that are presented to the Snippet Engine, we now define our Compressed Token System (CTS) document storage scheme, and the baseline system used for comparison.\n4.2 Baseline Snippet Engine An obvious document representation scheme is to simply compress each document with a well known adaptive compressor, and then decompress the document as required [1], using a string matching algorithm to effect the algorithm in Figure 2.\nAccordingly, we implemented such a system, using zlib [4] with default parameters to compress every document after it has been parsed as in Section 4.1.\nEach document is stored in a single file.\nWhile manageable for our small test collections or small enterprises with millions of documents, a full Web search engine may require multiple documents to inhabit single files, or a special purpose filesystem [6].\nFor snippet generation, the required documents are decompressed one at a time, and a linear search for provided query terms is employed.\nThe search is optimized for our specific task, which is restricted to matching whole words and the sentence terminating token, rather than general pattern matching.\n4.3 The CTS Snippet Engine Several optimizations over the baseline are possible.\nThe first is to employ a semi-static compression method over the entire document collection, which will allow faster decompression with minimal compression loss [24].\nUsing a semistatic approach involves mapping words and non-words produced by the parser to single integer tokens, with frequent symbols receiving small integers, and then choosing a coding scheme that assigns small numbers a small number of bits.\nWords and non-words strictly alternate in the compressed file, which always begins with a word.\nIn this instance we simply assign each symbol its ordinal number in a list of symbols sorted by frequency.\nWe use the vbyte coding scheme to code the word tokens [22].\nThe set of non-words is limited to the 64 most common punctuation sequences in the collection itself, and are encoded with a flat 6-bit binary code.\nThe remaining 2 bits of each punctuation symbol are used to store capitalization information.\nThe process of computing the semi-static 
model is complicated by the fact that the number of words and non-words appearing in large web collections is high.\nIf we stored all words and non-words appearing in the collection, and their associated frequency, many gigabytes of RAM or a B-tree or similar on-disk structure would be required [23].\nMoffat et al. [14] have examined schemes for pruning models during compression using large alphabets, and conclude that rarely occurring terms need not reside in the model.\nRather, rare terms are spelt out in the final compressed file, using a special word token (escape symbol), to signal their occurrence.\nDuring the first pass of encoding, two move-to-front queues are kept; one for words and one for non-words.\nWhenever the available memory is consumed and a new symbol is discovered in the collection, an existing symbol is discarded from the end of the queue.\nIn our implementation, we enforce the stricter condition on eviction that, where possible, the evicted symbol should have a frequency of one.\nIf there is no symbol with frequency one in the last half of the queue, then we evict symbols of frequency two, and so on until enough space is available in the model for the new symbol.\nThe second pass of encoding replaces each word with its vbyte encoded number, or the escape symbol and an ASCII representation of the word if it is not in the model.\nSimilarly each non-word sequence is replaced with its codeword, or the codeword for a single space character if it is not in the model.\nWe note that this lossless compression of non-words is acceptable when the documents are used for snippet generation, but may not be acceptable for a document database.\nWe assume that a separate sub-system would hold cached documents for other purposes where exact punctuation is important.\nWhile this semi-static scheme should allow faster decompression than the baseline, it also readily allows direct matching of query terms as compressed integers in the compressed file.\nThat is, sentences can be scored without having to decompress a document, and only the sentences returned as part of a snippet need to be decoded.\nThe CTS system stores all documents contiguously in one file, and an auxiliary table of 64 bit integers indicating the start offset of each document in the file.\nFurther, it must have access to the reverse mapping of term numbers, allowing those words not spelt out in the document to be recovered and returned to the Query Engine as strings.\nThe first of these data structures can be readily partitioned and distributed if the Snippet Engine occupies multiple machines; the second, however, is not so easily partitioned, as any document on a remote machine might require access to the whole integer-to-string mapping.\nThis is the second reason for employing the model pruning step during construction of the semi-static code: it limits the size of the reverse mapping table that should be present on every machine implementing the Snippet Engine.\n4.4 Experimental assessment of CTS All experiments reported in this paper were run on a Sun Fire V210 Server running Solaris 10.\nThe machine consists of dual 1.34 GHz UltraSPARC IIIi processors and 4GB of wt10g wt50g wt100g No.\nDocs.\n(\u00d7106 ) 1.7 10.1 18.5 Raw Text 10, 522 56, 684 102, 833 Baseline(zlib) 2, 568 (24%) 10, 940 (19%) 19, 252 (19%) CTS 2, 722 (26%) 12, 010 (21%) 22, 269 (22%) Table 1: Total storage space (Mb) for documents for the three test collections both compressed, and uncompressed.\n0 20 40 60 0.00.20.40.60.8 Queries grouped in 100``s 
Time(seconds) 0 20 40 60 0.00.20.40.60.8 Queries grouped in 100``s Time(seconds) 0 20 40 60 0.00.20.40.60.8 Queries grouped in 100``s Time(seconds) Baseline CTS with caching CTS without caching Figure 3: Time to generate snippets for 10 documents per query, averaged over buckets of 100 queries, for the first 7000 Excite queries on wt10g.\nRAM.\nAll source code was compiled using gcc4.1.1 with -O9 optimisation.\nTimings were run on an otherwise unoccupied machine and were averaged over 10 runs, with memory flushed between runs to eliminate any caching of data files.\nIn the absence of evidence to the contrary, we assume that it is important to model realistic query arrival sequences and the distribution of query repetitions for our experiments.\nConsequently, test collections which lack real query logs, such as TREC ad-hoc and .\nGOV2 were not considered suitable.\nObtaining extensive query logs and associated result doc-ids for a corresponding large collection is not easy.\nWe have used two collections (wt10g and wt100g) from the TREC Web Track [8] coupled with queries from Excite logs from the same (c. 1997) period.\nFurther, we also made use of a medium sized collection wt50g, obtained by randomly sampling half of the documents from wt100g.\nThe first two rows of Table 1 give the number of documents and the size in Mb of these collections.\nThe final two rows of Table 1 show the size of the resulting document sets after compression with the baseline and CTS schemes.\nAs expected, CTS admits a small compression loss over zlib, but both substantially reduce the size of the text to about 20% of the original, uncompressed size.\nNote that the figures for CTS do not include the reverse mapping from integer token to string that is required to produce the final snippets as that occupies RAM.\nIt is 1024 Mb in these experiments.\nThe Zettair search engine [25] was used to produce a list of documents to summarize for each query.\nFor the majority of the experiments the Okapi BM25 scoring scheme was used to determine document rankings.\nFor the static caching experiments reported in Section 5, the score of each document wt10g wt50g wt100g Baseline 75\u00a0157\u00a0183 CTS 38 70 77 Reduction in time 49% 56% 58% Table 2: Average time (msec) for the final 7000 queries in the Excite logs using the baseline and CTS systems on the 3 test collections.\nis a 50:50 weighted average of the BM25 score (normalized by the top scoring document for each query) and a score for each document independent of any query.\nThis is to simulate effects of ranking algorithms like PageRank [1] on the distribution of document requests to the Snippet Engine.\nIn our case we used the normalized Access Count [5] computed from the top 20 documents returned to the first 1 million queries from the Excite log to determine the query independent score component.\nPoints on Figure 3 indicate the mean running time to generate 10 snippets for each query, averaged in groups of 100 queries, for the first 7000 queries in the Excite query log.\nOnly the data for wt10g is shown, but the other collections showed similar patterns.\nThe x-axis indicates the group of 100 queries; for example, 20 indicates the queries 2001 to 2100.\nClearly there is a caching effect, with times dropping substantially after the first 1000 or so queries are processed.\nAll of this is due to the operating system caching disk blocks and perhaps pre-fetching data ahead of specific read requests.\nThis is evident because the baseline system has no large internal data 
structures to take advantage of non-disk based caching, it simply opens and processes files, and the speed up is evident for the baseline system.\nPart of this gain is due to the spatial locality of disk references generated by the query stream: repeated queries will already have their document files cached in memory; and similarly different queries that return the same documents will benefit from document caching.\nBut when the log is processed after removing all but the first request for each document, the pronounced speed-up is still evident as more queries are processed (not shown in figure).\nThis suggests that the operating system (or the disk itself) is reading and buffering a larger amount of data than the amount requested and that this brings benefit often enough to make an appreciable difference in snippet generation times.\nThis is confirmed by the curve labeled CTS without caching, which was generated after mounting the filesystem with a no-caching option (directio in Solaris).\nWith disk caching turned off, the average time to generate snippets varies little.\nThe time to generate ten snippets for a query, averaged over the final 7000 queries in the Excite log as caching effects have dissipated, are shown in Table 2.\nOnce the system has stabilized, CTS is over 50% faster than the Baseline system.\nThis is primarily due to CTS matching single integers for most query words, rather than comparing strings in the baseline system.\nTable 3 shows a break down of the average time to generate ten snippets over the final 7000 queries of the Excite log on the wt50g collection when entire documents are processed, and when only the first half of each document is processed.\nAs can be seen, the majority of time spent generating a snippet is in locating the document on disk (Seek): 64% for whole documents, and 75% for half documents.\nEven if the amount of processing a document must % of doc processed Seek Read Score & Decode 100% 45 4 21 50% 45 4 11 Table 3: Time to generate 10 snippets for a single query (msec) for the wt50g collection averaged over the final 7000 Excite queries when either all of each document is processed (100%) or just the first half of each document (50%).\nundergo is halved, as in the second row of the Table, there is only a 14% reduction in the total time required to generate a snippet.\nAs locating documents in secondary storage occupies such a large proportion of snippet generation time, it seems logical to try and reduce its impact through caching.\n5.\nDOCUMENT CACHING In Section 3 we observed that the Snippet Engine would have its own RAM in proportion to the size of the document collection.\nFor example, on a whole-of-Web search engine, the Snippet Engine would be distributed over many workstations, each with at least 4 Gb of RAM.\nIn a small enterprise, the Snippet Engine may be sharing RAM with all other sub-systems on a single workstation, hence only have 100 Mb available.\nIn this section we use simulation to measure the number of cache hits in the Snippet Engine as memory size varies.\nWe compare two caching policies: a static cache, where the cache is loaded with as many documents as it can hold before the system begins answering queries, and then never changes; and a least-recently-used cache, which starts out as for the static cache, but whenever a document is accessed it moves to the front of a queue, and if a document is fetched from disk, the last item in the queue is evicted.\nNote that documents are first loaded into the caches in order of 
decreasing query independent score, which is computed as described in Section 4.4.\nThe simulations also assume a query cache exists for the top Q most frequent queries, and that these queries are never processed by the Snippet Engine.\nAll queries passed into the simulations are from the second half of the Excite query log (the first half being used to compute query independent scores), and are stemmed, stopped, and have their terms sorted alphabetically.\nThis final alteration simply allows queries such as red dog and dog red to return the same documents, as would be the case in a search engine where explicit phrase operators would be required in the query to enforce term order and proximity.\nFigure 4 shows the percentage of document access that hit cache using the two caching schemes, with Q either 0 or 10,000, on 535,276 Excite queries on wt100g.\nThe xaxis shows the percentage of documents that are held in the cache, so 1.0% corresponds to about 185,000 documents.\nFrom this figure it is clear that caching even a small percentage of the documents has a large impact on reducing seek time for snippet generation.\nWith 1% of documents cached, about 222 Mb for the wt100g collection, around 80% of disk seeks are avoided.\nThe static cache performs surprisingly well (squares in Figure 4), but is outperformed by the LRU cache (circles).\nIn an actual implementation of LRU, however, there may be fragmentation of the cache as documents are swapped in and out.\nThe reason for the large impact of the document cache is 0.0 0.5 1.0 1.5 2.0 2.5 3.0 020406080100 Cache size (% of collection) %ofaccessesascachehits LRU Q=0 LRU Q=10,000 Static Q=0 Static Q=10,000 Figure 4: Percentage of the time that the Snippet Engine does not have to go to disk in order to generate a snippet plotted against the size of the document cache as a percentage of all documents in the collection.\nResults are from a simulation on wt100g with 535,276 Excite queries.\nthat, for a particular collection, some documents are much more likely to appear in results lists than others.\nThis effect occurs partly because of the approximately Zipfian query frequency distribution, and partly because most Web search engines employ ranking methods which combine query based scores with static (a priori) scores determined from factors such as link graph measures, URL features, spam scores and so on [17].\nDocuments with high static scores are much more likely to be retrieved than others.\nIn addition to the document cache, the RAM of the Snippet Engine must also hold the CTS decoding table that maps integers to strings, which is capped by a parameter at compression time (1 Gb in our experiments here).\nThis is more than compensated for by the reduced size of each document, allowing more documents into the document cache.\nFor example, using CTS reduces the average document size from 5.7 Kb to 1.2 Kb (as shown in Table 1), so a 2 Gb RAM could hold 368,442 uncompressed documents (2% of the collection), or 850,691 documents plus a 1 Gb decompression table (5% of the collection).\nIn fact, further experimentation with the model size reveals that the model can in fact be very small and still CTS gives good compression and fast scoring times.\nThis is evidenced in Figure 5, where the compressed size of wt50g is shown in the solid symbols.\nNote that when no compression is used (Model Size is 0Mb), the collection is only 31 Gb as HTML markup, JavaScript, and repeated punctuation has been discarded as described in Section 4.1.\nWith a 5 Mb model, 
Figure 5 also shows the average time to score and decode a snippet (excluding seek time) with the different model sizes (open symbols).\nAgain, there is a large speed-up when a 5 Mb model is used, but little improvement with larger models.\nFigure 5: Collection size of the wt50g collection when compressed with CTS using different memory limits on the model, and the average time to generate a single snippet, excluding seek time, on 20000 Excite queries using those models.\nSimilar results hold for the wt100g collection, where a model of about 10 Mb offers substantial space and time savings over no model at all, but returns diminish as the model size increases.\nApart from compression, there is another approach to reducing the size of each document in the cache: do not store the full document in cache.\nRather, store in the cache the sentences that are likely to be used in snippets, and if, during snippet generation on a cached document, the sentence scores do not reach a certain threshold, then retrieve the whole document from disk.\nThis raises the question of which sentences from each document to put in cache and which to leave on disk, which we address in the next section.\n6.\nSENTENCE REORDERING\nSentences within each document can be re-ordered so that sentences that are very likely to appear in snippets are at the front of the document, and hence processed first at query time, while less likely sentences are relegated to the rear of the document.\nThen, during query time, if k sentences with a score exceeding some threshold are found before the entire document is processed, the remainder of the document is ignored.\nFurther, to improve caching, only the head of each document can be stored in the cache, with the tail residing on disk.\nNote that if the search engine is to provide cached copies of a document (that is, the exact text of the document as it was indexed), then this would be serviced by another sub-system in Figure 1, and not from the altered copy we store in the Snippet Engine.\nWe now introduce four sentence reordering approaches.\n1.\nNatural order\nThe first few sentences of a well-authored document usually best describe the document content [12].\nThus simply processing a document in order should yield a quality snippet.\nUnfortunately, however, web documents are often not well authored, with little editorial or professional writing skill brought to bear on the creation of a work of literary merit.\nMore important, perhaps, is that we are producing query-biased snippets, and there is no guarantee that query terms will appear in sentences towards the front of a document.\n2.\nSignificant terms (ST)\nLuhn introduced the concept of a significant sentence as one containing a cluster of significant terms [12], a concept found to work well by Tombros and Sanderson [20].\nLet f_{d,t} be the frequency of term t in document d, and let s_d be the number of sentences in document d; then term t is determined to be significant if f_{d,t} \u2265 7 \u2212 0.1 \u00d7 (25 \u2212 s_d) when s_d < 25, if f_{d,t} \u2265 7 when 25 \u2264 s_d \u2264 40, and if f_{d,t} \u2265 7 + 0.1 \u00d7 (s_d \u2212 40) when s_d > 40.\nA bracketed section is defined as a group of terms where the leftmost and rightmost terms are significant terms, and no significant terms in the bracketed section are divided by more than four non-significant terms.
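As a quick illustration of the threshold just defined, here is a small sketch of our own rather than the paper's code; the function names and the example frequencies are invented for the example.

```python
def significance_threshold(num_sentences: int) -> float:
    """Minimum within-document frequency for a term to be significant under
    the rule quoted above, as a function of the number of sentences s_d."""
    s = num_sentences
    if s < 25:
        return 7 - 0.1 * (25 - s)
    if s <= 40:
        return 7.0
    return 7 + 0.1 * (s - 40)

def significant_terms(term_freqs: dict, num_sentences: int) -> set:
    """Terms whose document frequency f_{d,t} meets the threshold."""
    cutoff = significance_threshold(num_sentences)
    return {t for t, f in term_freqs.items() if f >= cutoff}

# A 10-sentence document has threshold 7 - 0.1 * (25 - 10) = 5.5
print(significance_threshold(10))                               # 5.5
print(significant_terms({"cache": 6, "disk": 2, "snippet": 3}, 10))  # {'cache'}
```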
non-significant terms.\nThe score of a bracketed section is the square of the number of significant words falling in the section, divided by the total number of words in the entire sentence.\nThe a priori score for a sentence is computed as the maximum of all scores for the bracketed sections of the sentence.\nWe then sort the sentences by this score.\n3.\nQuery log based (QLt)\nMany Web queries repeat, and a small number of queries make up a large volume of total searches [9].\nIn order to take advantage of this bias, sentences that contain many past query terms should be promoted to the front of a document, while sentences that contain few query terms should be demoted.\nIn this scheme, the sentences are sorted by the number of sentence terms that occur in the query log.\nTo ensure that long sentences do not dominate shorter, higher-quality sentences, the score assigned to each sentence is divided by the number of terms in that sentence, giving each sentence a score between 0 and 1.\n4.\nQuery log based (QLu)\nThis scheme is as for QLt, but repeated terms in the sentence are only counted once.\nBy re-ordering sentences using schemes ST, QLt or QLu, we aim to terminate snippet generation earlier than if Natural Order is used, but still produce sentences with the same number of unique query terms (d in Figure 2), the same total number of query terms (c), the same positional score (h + l) and the same maximum span (k).\nAccordingly, we conducted experiments comparing the methods: the first 80% of the Excite query log was used to reorder sentences when required, and the final 20% was used for testing.\nFigure 6 shows the differences in the snippet scoring components for each of the three methods relative to the Natural Order method.\nIt is clear that sorting sentences using the Significant Terms (ST) method leads to the smallest change in the sentence scoring components.\nThe greatest change over all methods is in the sentence position (h + l) component of the score, which is to be expected, as there is no guarantee that leading and heading sentences are processed at all after sentences are re-ordered.\nThe second most affected component is the number of distinct query terms in a returned sentence, but if only the first 50% of the document is processed with the ST method, there is a drop of only 8% in the number of distinct query terms found in snippets.\nDepending on how these various components are weighted to compute an overall snippet score, one can argue that there is little overall effect on scores when processing only half the document using the ST method.\nFigure 6: Relative difference in the snippet score components compared to Natural Ordered documents when the amount of each document that is processed is reduced, and the sentences in the document are reordered using Query Logs (QLt, QLu) or Significant Terms (ST).\n7.\nDISCUSSION\nIn this paper we have described the algorithms and compression scheme that would make a good Snippet Engine sub-system for generating text snippets of the type shown on the results pages of well-known Web search engines.\nOur experiments not only show that our scheme is over 50% faster than the obvious baseline, but also reveal some very important aspects of the snippet generation problem.\nPrimarily, caching documents avoids seek costs to secondary memory for each document that is to be summarized, and is vital for fast snippet generation.
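The two query-log orderings, QLt and QLu, can be sketched in a few lines. This is our own illustration rather than the authors' code, and the tokenized sentences, vocabulary, and function names are invented for the example; it assumes sentences arrive as lists of tokens and that the query-log vocabulary fits in a set.

```python
def ql_score(sentence_terms, log_vocab, unique_only=False):
    """Fraction of a sentence's terms that occur in the query-log vocabulary.
    With unique_only=True repeated terms count once (QLu); otherwise every
    occurrence counts (QLt).  Dividing by sentence length keeps scores in
    [0, 1] so long sentences do not dominate."""
    if not sentence_terms:
        return 0.0
    if unique_only:
        hits = len(set(sentence_terms) & log_vocab)
    else:
        hits = sum(1 for t in sentence_terms if t in log_vocab)
    return hits / len(sentence_terms)

def reorder_sentences(sentences, log_vocab, unique_only=False):
    """Sort sentences (lists of tokens) so those most likely to contain future
    query terms come first; ties keep their original (natural) order."""
    return sorted(sentences,
                  key=lambda s: ql_score(s, log_vocab, unique_only),
                  reverse=True)

log_vocab = {"snippet", "cache", "search", "engine"}
doc = [["contact", "us", "today"],
       ["search", "search", "search", "search", "news"],
       ["fast", "snippet", "cache", "for", "engines"]]
# QLt rewards the repeated "search"; QLu demotes it below the third sentence.
print(reorder_sentences(doc, log_vocab))          # QLt ordering
print(reorder_sentences(doc, log_vocab, True))    # QLu ordering
```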
Our caching simulations show that if as little as 1% of the documents can be cached in RAM as part of the Snippet Engine, possibly distributed over many machines, then around 75% of seeks can be avoided.\nOur second major result is that keeping only half of each document in RAM, effectively doubling the cache size, has little effect on the quality of the final snippets generated from those half-documents, provided that the sentences that are kept in memory are chosen using the Significant Term algorithm of Luhn [12].\nBoth our document compression and compaction schemes dramatically reduce the time taken to generate snippets.\nNote that these results were generated using a 100 Gb subset of the Web, and the Excite query log gathered from the same period as that subset was created.\nWe are assuming, as there is no evidence to the contrary, that this collection and log are representative of search engine input in other domains.\nIn particular, we can scale our results to estimate what resources would be required, using our scheme, to provide a Snippet Engine for the entire World Wide Web.\nWe will assume that the Snippet Engine is distributed across M machines, and that there are N web pages in the collection to be indexed and served by the search engine.\nWe also assume a balanced load for each machine, so each machine serves about N/M documents, which is easily achieved in practice.\nEach machine, therefore, requires RAM to hold the following.\n\u2022 The CTS model, which should be 1/1000 of the size of the uncompressed collection (using results in Figure 5 and Williams et al. [23]).\nAssuming an average uncompressed document size of 8 Kb [11], this would require N/M \u00d7 8.192 bytes of memory.\n\u2022 A cache of 1% of all N/M documents.\nEach document requires 2 Kb when compressed with CTS (Table 1), and only half of each document is required using ST sentence reordering, requiring a total of N/M \u00d7 0.01 \u00d7 1024 bytes.\n\u2022 The offset array that gives the start position of each document in the single, compressed file: 8 bytes per N/M documents.\nThe total amount of RAM required by a single machine, therefore, would be N/M \u00d7 (8.192 + 10.24 + 8) bytes.\nAssuming that each machine has 8 Gb of RAM, and that there are 20 billion pages to index on the Web, a total of M = 62 machines would be required for the Snippet Engine.\nOf course in practice, more machines may be required to manage the distributed system, to provide backup services for failed machines, and to provide other networking services.\nThese machines would also need access to 37 Tb of disk to store the compressed document representations that are not in cache.\nIn this work we have deliberately avoided committing to one particular scoring method for sentences in documents.\nRather, we have reported accuracy results in terms of the four components that have been previously shown to be important in determining useful snippets [20].\nThe CTS method can incorporate any new metrics that may arise in the future that are calculated on whole words.\nThe document compaction techniques using sentence re-ordering, however, remove the spatial relationship between sentences, and so if a scoring technique relies on the position of a sentence within a document, the aggressive compaction techniques reported here cannot be used.\nA variation on the semi-static compression approach we have adopted in this work has been used successfully in previous search engine design [24], but there are alternate compression schemes that allow direct matching in compressed text (see Navarro and M\u00e4kinen [15] for a recent survey).
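The arithmetic behind the 62-machine and 37 Tb figures can be reproduced with a short script. This is a sketch under the assumptions stated above (binary Gb/Tb units, 2 Kb CTS documents halved by ST reordering, a 1% cache, an 8-byte offset per document); the variable names are ours.

```python
GB = 1 << 30
TB = 1 << 40

N = 20_000_000_000            # pages to be served by the whole engine
ram_per_machine = 8 * GB

# Per-document RAM in bytes, following the three items listed above.
model_bytes  = 8.192          # CTS model: 1/1000 of an 8 Kb uncompressed document
cache_bytes  = 0.01 * 1024    # 1% of documents cached at (2 Kb compressed) / 2 with ST reordering
offset_bytes = 8              # one 64-bit offset into the compressed file per document
per_doc = model_bytes + cache_bytes + offset_bytes    # 26.432 bytes

total_ram = N * per_doc
machines = -(-int(total_ram) // ram_per_machine)      # ceiling division
disk_tb = N * 2 * 1024 / TB                           # compressed documents not held in cache

print(f"RAM per document: {per_doc:.3f} bytes")       # 26.432
print(f"Machines needed:  {machines}")                # 62
print(f"Disk needed:      {disk_tb:.0f} Tb")          # ~37
```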
As seek time dominates the snippet generation process, we have not focused on this portion of the snippet generation process in detail in this paper.\nWe will explore alternate compression schemes in future work.\nAcknowledgments\nThis work was supported in part by ARC Discovery Project DP0558916 (AT).\nThanks to Nick Lester and Justin Zobel for valuable discussions.\n8.\nREFERENCES\n[1] S. Brin and L. Page.\nThe anatomy of a large-scale hypertextual Web search engine.\nIn WWW7, pages 107-117, 1998.\n[2] R. Fagin, R. Kumar, K. S. McCurley, J. Novak, D. Sivakumar, J. A. Tomlin, and D. P. Williamson.\nSearching the workplace web.\nIn WWW2003, Budapest, Hungary, May 2003.\n[3] T. Fagni, R. Perego, F. Silvestri, and S. Orlando.\nBoosting the performance of web search engines: Caching and prefetching query results by exploiting historical usage data.\nACM Trans. Inf. Syst., 24(1):51-78, 2006.\n[4] J.-L. Gailly and M. Adler.\nZlib Compression Library.\nwww.zlib.net.\nAccessed January 2007.\n[5] S. Garcia, H. E. Williams, and A. Cannane.\nAccess-ordered indexes.\nIn V. Estivill-Castro, editor, Proc. Australasian Computer Science Conference, pages 7-14, Dunedin, New Zealand, 2004.\n[6] S. Ghemawat, H. Gobioff, and S. Leung.\nThe Google file system.\nIn SOSP '03: Proc. of the 19th ACM Symposium on Operating Systems Principles, pages 29-43, New York, NY, USA, 2003.\nACM Press.\n[7] J. Goldstein, M. Kantrowitz, V. Mittal, and J. Carbonell.\nSummarizing text documents: sentence selection and evaluation metrics.\nIn SIGIR99, pages 121-128, 1999.\n[8] D. Hawking, N. Craswell, and P. Thistlewaite.\nOverview of TREC-7 Very Large Collection Track.\nIn Proc. of TREC-7, pages 91-104, November 1998.\n[9] B. J. Jansen, A. Spink, and J. Pedersen.\nA temporal comparison of AltaVista web searching.\nJ. Am. Soc. Inf. Sci. Tech. (JASIST), 56(6):559-570, April 2005.\n[10] J. Kupiec, J. Pedersen, and F. Chen.\nA trainable document summarizer.\nIn SIGIR95, pages 68-73, 1995.\n[11] S. Lawrence and C. L. Giles.\nAccessibility of information on the web.\nNature, 400:107-109, July 1999.\n[12] H. P. Luhn.\nThe automatic creation of literature abstracts.\nIBM Journal, pages 159-165, April 1958.\n[13] I. Mani.\nAutomatic Summarization, volume 3 of Natural Language Processing.\nJohn Benjamins Publishing Company, Amsterdam/Philadelphia, 2001.\n[14] A. Moffat, J. Zobel, and N. Sharman.\nText compression for dynamic document databases.\nKnowledge and Data Engineering, 9(2):302-313, 1997.\n[15] G. Navarro and V. M\u00e4kinen.\nCompressed full text indexes.\nACM Computing Surveys, 2007.\nTo appear.\n[16] D. R. Radev, E. Hovy, and K. McKeown.\nIntroduction to the special issue on summarization.\nComput. Linguist., 28(4):399-408, 2002.\n[17] M. Richardson, A. Prakash, and E. Brill.\nBeyond PageRank: machine learning for static ranking.\nIn WWW06, pages 707-715, 2006.\n[18] T. Sakai and K. Sparck-Jones.\nGeneric summaries for indexing in information retrieval.\nIn SIGIR01, pages 190-198, 2001.\n[19] H. G. Silber and K. F. McCoy.\nEfficiently computed lexical chains as an intermediate representation for automatic text summarization.\nComput. Linguist., 28(4):487-496, 2002.\n[20] A. Tombros and M. Sanderson.\nAdvantages of query biased summaries in information retrieval.\nIn SIGIR98, pages 2-10, Melbourne, Aust., August 1998.\n[21] R. W. White, I. Ruthven, and J. M. 
Jose.\nFinding relevant documents using top ranking sentences: an evaluation of two alternative schemes.\nIn SIGIR02, pages 57-64, 2002.\n[22] H. E. Williams and J. Zobel.\nCompressing integers for fast file access.\nComp.\nJ., 42(3):193-201, 1999.\n[23] H.E. Williams and J. Zobel.\nSearchable words on the Web.\nInternational Journal on Digital Libraries, 5(2):99-105, April 2005.\n[24] I. H. Witten, A. Moffat, and T. C. Bell.\nManaging Gigabytes: Compressing and Indexing Documents and Images.\nMorgan Kaufmann Publishing, San Francisco, second edition, May 1999.\n[25] The Zettair Search Engine.\nwww.seg.rmit.edu.au/zettair.\nAccessed January 2007.", "lvl-3": "Fast Generation of Result Snippets in Web Search\nABSTRACT\nThe presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users .\nIn this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets .\nWe begin by proposing and analysing a document compression method that reduces snippet generation time by 58 % over a baseline using the zlib compression library .\nThese experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets , and so caching documents in RAM is essential for a fast snippet generation process .\nUsing simulation , we examine snippet generation performance for different size RAM caches .\nFinally we propose and analyse document reordering and compaction , revealing a scheme that increases the number of document cache hits with only a marginal affect on snippet quality .\nThis scheme effectively doubles the number of documents that can fit in a fixed size cache .\n1 .\nINTRODUCTION\nEach result in search results list delivered by current WWW search engines such as search.yahoo.com , google.com and search.msn.com typically contains the title and URL of the actual document , links to live and cached versions of the document and sometimes an indication of file size and type .\nIn addition , one or more snippets are usually presented , giving the searcher a sneak preview of the document contents .\nSnippets are short fragments of text extracted from the document content ( or its metadata ) .\nThey may be static ( for example , always show the first 50 words of the document , or the content of its description metadata , or a description taken from a directory site such as dmoz.org ) or query-biased [ 20 ] .\nA query-biased snippet is one selectively extracted on the basis of its relation to the searcher 's query .\nThe addition of informative snippets to search results may substantially increase their value to searchers .\nAccurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored .\nIn the best case , snippets may obviate the need to open any documents by directly providing the answer to the searcher 's real information need , such as the contact details of a person or an organization .\nGeneration of query-biased snippets by Web search engines indexing of the order of ten billion web pages and handling hundreds of millions of search queries per day imposes a very significant computational load ( remembering that each search typically generates ten snippets ) .\nThe simpleminded approach of keeping a copy of each document in a file and generating snippets by opening and scanning files , works when query rates are low and collections are small 
, but does not scale to the degree required .\nThe overhead of opening and reading ten files per query on top of accessing the index structure to locate them , would be manifestly excessive under heavy query load .\nEven storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems .\nSpecial-purpose filesystems have been built to address these problems [ 6 ] .\nNote that the utility of snippets is by no means restricted to whole-of-Web search applications .\nEfficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov ( c. 25 million pages ) and govsearch.australia.gov.au ( c. 5 million pages ) and within large enterprises such as IBM [ 2 ] ( c. 50 million pages ) .\nSnippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present .\nWe present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets and compare its space/time performance against an obvious baseline based on the zlib compressor on various data sets .\nWe report the proportion of time spent for disk seeks , disk reads and cpu processing ; demonstrating that the time for locating each document ( seek time ) dominates , as expected .\nAs the time to process a document in RAM is small in comparison to locating and reading the document into memory , it may seem that compression is not required .\nHowever , this is only true if there is no caching of documents in RAM .\nControlling the RAM of physical systems for experimentation is difficult , hence we use simulation to show that caching documents dramatically improves the performance of snippet generation .\nIn turn , the more documents can be compressed , the more can fit in cache , and hence the more disk seeks can be avoided : the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [ 24 ] .\nAs hitting the document cache is important , we examine document compaction , as opposed to compression , schemes by imposing an a priori ordering of sentences within a document , and then only allowing leading sentences into cache for each document .\nThis leads to further time savings , with only marginal impact on the quality of the snippets returned .\n2 .\nRELATED WORK\nSnippet generation is a special type of extractive document summarization , in which sentences , or sentence fragments , are selected for inclusion in the summary on the basis of the degree to which they match the search query .\nThis process was given the name of query-biased summarization by Tombros and Sanderson [ 20 ] The reader is referred to Mani [ 13 ] and to Radev et al. 
[ 16 ] for overviews of the very many different applications of summarization and for the equally diverse methods for producing summaries .\nEarly Web search engines presented query-independent snippets consisting of the first k bytes of the result document .\nGenerating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries , as there is no need to search the document for text fragments containing query terms .\nTo our knowledge , Google was the first whole-ofWeb search engine to provide query biased summaries , but summarization is listed by Brin and Page [ 1 ] only under the heading of future work .\nMost of the experimental work using query-biased summarization has focused on comparing their value to searchers relative to other types of summary [ 20 , 21 ] , rather than efficient generation of summaries .\nDespite the importance of efficient summary generation in Web search , few algorithms appear in the literature .\nSilber and McKoy [ 19 ] describe a linear-time lexical chaining algorithm for use in generic summaries , but offer no empirical data for the performance of their algorithm .\nWhite et al [ 21 ] report some experimental timings of their WebDocSum system , but the snippet generation algorithms themselves are not isolated , so it is difficult to infer snippet generation time comparable to the times we report in this paper .\n3 .\nSEARCH ENGINE ARCHITECTURES\nSIGIR 2007 Proceedings Session 6 : Summaries\n4 .\nTHE SNIPPET ENGINE\n4.1 Parsing Web Documents\n4.2 Baseline Snippet Engine\n4.3 The CTS Snippet Engine\nSIGIR 2007 Proceedings Session 6 : Summaries\n4.4 Experimental assessment of CTS\nQueries grouped in 100 's\n5 .\nDOCUMENT CACHING\n6 .\nSENTENCE REORDERING\nDocuments size used\n7 .\nDISCUSSION\nSIGIR 2007 Proceedings Session 6 : Summaries\nN/M documents .\nThe total amount of RAM required by a single machine , therefore , would be N/M ( 8.192 + 10.24 + 8 ) bytes .\nAssuming that each machine has 8 Gb of RAM , and that there are 20 billion pages to index on the Web , a total of M = 62 machines would be required for the Snippet Engine .\nOf course in practice , more machines may be required to manage the distributed system , to provide backup services for failed machines , and other networking services .\nThese machines would also need access to 37 Tb of disk to store the compressed document representations that were not in cache .\nIn this work we have deliberately avoided committing to one particular scoring method for sentences in documents .\nRather , we have reported accuracy results in terms of the four components that have been previously shown to be important in determining useful snippets [ 20 ] .\nThe CTS method can incorporate any new metrics that may arise in the future that are calculated on whole words .\nThe document compaction techniques using sentence re-ordering , however , remove the spatial relationship between sentences , and so if a scoring technique relies on the position of a sentence within a document , the aggressive compaction techniques reported here can not be used .\nA variation on the semi-static compression approach we have adopted in this work has been used successfully in previous search engine design [ 24 ] , but there are alternate compression schemes that allow direct matching in compressed text ( see Navarro and M \u00a8 akinen [ 15 ] for a recent survey . 
)\nAs seek time dominates the snippet generation process , we have not focused on this portion of the snippet generation in detail in this paper .\nWe will explore alternate compression schemes in future work .", "lvl-4": "Fast Generation of Result Snippets in Web Search\nABSTRACT\nThe presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users .\nIn this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets .\nWe begin by proposing and analysing a document compression method that reduces snippet generation time by 58 % over a baseline using the zlib compression library .\nThese experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets , and so caching documents in RAM is essential for a fast snippet generation process .\nUsing simulation , we examine snippet generation performance for different size RAM caches .\nFinally we propose and analyse document reordering and compaction , revealing a scheme that increases the number of document cache hits with only a marginal affect on snippet quality .\nThis scheme effectively doubles the number of documents that can fit in a fixed size cache .\n1 .\nINTRODUCTION\nEach result in search results list delivered by current WWW search engines such as search.yahoo.com , google.com and search.msn.com typically contains the title and URL of the actual document , links to live and cached versions of the document and sometimes an indication of file size and type .\nIn addition , one or more snippets are usually presented , giving the searcher a sneak preview of the document contents .\nSnippets are short fragments of text extracted from the document content ( or its metadata ) .\nA query-biased snippet is one selectively extracted on the basis of its relation to the searcher 's query .\nThe addition of informative snippets to search results may substantially increase their value to searchers .\nAccurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored .\nIn the best case , snippets may obviate the need to open any documents by directly providing the answer to the searcher 's real information need , such as the contact details of a person or an organization .\nGeneration of query-biased snippets by Web search engines indexing of the order of ten billion web pages and handling hundreds of millions of search queries per day imposes a very significant computational load ( remembering that each search typically generates ten snippets ) .\nThe simpleminded approach of keeping a copy of each document in a file and generating snippets by opening and scanning files , works when query rates are low and collections are small , but does not scale to the degree required .\nThe overhead of opening and reading ten files per query on top of accessing the index structure to locate them , would be manifestly excessive under heavy query load .\nEven storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems .\nNote that the utility of snippets is by no means restricted to whole-of-Web search applications .\nEfficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov ( c. 25 million pages ) and govsearch.australia.gov.au ( c. 
5 million pages ) and within large enterprises such as IBM [ 2 ] ( c. 50 million pages ) .\nSnippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present .\nWe present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets and compare its space/time performance against an obvious baseline based on the zlib compressor on various data sets .\nWe report the proportion of time spent for disk seeks , disk reads and cpu processing ; demonstrating that the time for locating each document ( seek time ) dominates , as expected .\nAs the time to process a document in RAM is small in comparison to locating and reading the document into memory , it may seem that compression is not required .\nHowever , this is only true if there is no caching of documents in RAM .\nControlling the RAM of physical systems for experimentation is difficult , hence we use simulation to show that caching documents dramatically improves the performance of snippet generation .\nIn turn , the more documents can be compressed , the more can fit in cache , and hence the more disk seeks can be avoided : the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [ 24 ] .\nAs hitting the document cache is important , we examine document compaction , as opposed to compression , schemes by imposing an a priori ordering of sentences within a document , and then only allowing leading sentences into cache for each document .\nThis leads to further time savings , with only marginal impact on the quality of the snippets returned .\n2 .\nRELATED WORK\nSnippet generation is a special type of extractive document summarization , in which sentences , or sentence fragments , are selected for inclusion in the summary on the basis of the degree to which they match the search query .\nEarly Web search engines presented query-independent snippets consisting of the first k bytes of the result document .\nGenerating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries , as there is no need to search the document for text fragments containing query terms .\nTo our knowledge , Google was the first whole-ofWeb search engine to provide query biased summaries , but summarization is listed by Brin and Page [ 1 ] only under the heading of future work .\nMost of the experimental work using query-biased summarization has focused on comparing their value to searchers relative to other types of summary [ 20 , 21 ] , rather than efficient generation of summaries .\nDespite the importance of efficient summary generation in Web search , few algorithms appear in the literature .\nWhite et al [ 21 ] report some experimental timings of their WebDocSum system , but the snippet generation algorithms themselves are not isolated , so it is difficult to infer snippet generation time comparable to the times we report in this paper .\nN/M documents .\nThe total amount of RAM required by a single machine , therefore , would be N/M ( 8.192 + 10.24 + 8 ) bytes .\nAssuming that each machine has 8 Gb of RAM , and that there are 20 billion pages to index on the Web , a total of M = 62 machines would be required for the Snippet Engine .\nThese machines would also need access to 37 Tb of disk to store the compressed document representations that were not in cache .\nIn this work we have deliberately avoided committing to one 
particular scoring method for sentences in documents .\nRather , we have reported accuracy results in terms of the four components that have been previously shown to be important in determining useful snippets [ 20 ] .\nThe document compaction techniques using sentence re-ordering , however , remove the spatial relationship between sentences , and so if a scoring technique relies on the position of a sentence within a document , the aggressive compaction techniques reported here can not be used .\nAs seek time dominates the snippet generation process , we have not focused on this portion of the snippet generation in detail in this paper .\nWe will explore alternate compression schemes in future work .", "lvl-2": "Fast Generation of Result Snippets in Web Search\nABSTRACT\nThe presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users .\nIn this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets .\nWe begin by proposing and analysing a document compression method that reduces snippet generation time by 58 % over a baseline using the zlib compression library .\nThese experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets , and so caching documents in RAM is essential for a fast snippet generation process .\nUsing simulation , we examine snippet generation performance for different size RAM caches .\nFinally we propose and analyse document reordering and compaction , revealing a scheme that increases the number of document cache hits with only a marginal affect on snippet quality .\nThis scheme effectively doubles the number of documents that can fit in a fixed size cache .\n1 .\nINTRODUCTION\nEach result in search results list delivered by current WWW search engines such as search.yahoo.com , google.com and search.msn.com typically contains the title and URL of the actual document , links to live and cached versions of the document and sometimes an indication of file size and type .\nIn addition , one or more snippets are usually presented , giving the searcher a sneak preview of the document contents .\nSnippets are short fragments of text extracted from the document content ( or its metadata ) .\nThey may be static ( for example , always show the first 50 words of the document , or the content of its description metadata , or a description taken from a directory site such as dmoz.org ) or query-biased [ 20 ] .\nA query-biased snippet is one selectively extracted on the basis of its relation to the searcher 's query .\nThe addition of informative snippets to search results may substantially increase their value to searchers .\nAccurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored .\nIn the best case , snippets may obviate the need to open any documents by directly providing the answer to the searcher 's real information need , such as the contact details of a person or an organization .\nGeneration of query-biased snippets by Web search engines indexing of the order of ten billion web pages and handling hundreds of millions of search queries per day imposes a very significant computational load ( remembering that each search typically generates ten snippets ) .\nThe simpleminded approach of keeping a copy of each document in a file and generating snippets by opening and scanning files , 
works when query rates are low and collections are small , but does not scale to the degree required .\nThe overhead of opening and reading ten files per query on top of accessing the index structure to locate them , would be manifestly excessive under heavy query load .\nEven storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems .\nSpecial-purpose filesystems have been built to address these problems [ 6 ] .\nNote that the utility of snippets is by no means restricted to whole-of-Web search applications .\nEfficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov ( c. 25 million pages ) and govsearch.australia.gov.au ( c. 5 million pages ) and within large enterprises such as IBM [ 2 ] ( c. 50 million pages ) .\nSnippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present .\nWe present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets and compare its space/time performance against an obvious baseline based on the zlib compressor on various data sets .\nWe report the proportion of time spent for disk seeks , disk reads and cpu processing ; demonstrating that the time for locating each document ( seek time ) dominates , as expected .\nAs the time to process a document in RAM is small in comparison to locating and reading the document into memory , it may seem that compression is not required .\nHowever , this is only true if there is no caching of documents in RAM .\nControlling the RAM of physical systems for experimentation is difficult , hence we use simulation to show that caching documents dramatically improves the performance of snippet generation .\nIn turn , the more documents can be compressed , the more can fit in cache , and hence the more disk seeks can be avoided : the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [ 24 ] .\nAs hitting the document cache is important , we examine document compaction , as opposed to compression , schemes by imposing an a priori ordering of sentences within a document , and then only allowing leading sentences into cache for each document .\nThis leads to further time savings , with only marginal impact on the quality of the snippets returned .\n2 .\nRELATED WORK\nSnippet generation is a special type of extractive document summarization , in which sentences , or sentence fragments , are selected for inclusion in the summary on the basis of the degree to which they match the search query .\nThis process was given the name of query-biased summarization by Tombros and Sanderson [ 20 ] The reader is referred to Mani [ 13 ] and to Radev et al. 
[ 16 ] for overviews of the very many different applications of summarization and for the equally diverse methods for producing summaries .\nEarly Web search engines presented query-independent snippets consisting of the first k bytes of the result document .\nGenerating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries , as there is no need to search the document for text fragments containing query terms .\nTo our knowledge , Google was the first whole-ofWeb search engine to provide query biased summaries , but summarization is listed by Brin and Page [ 1 ] only under the heading of future work .\nMost of the experimental work using query-biased summarization has focused on comparing their value to searchers relative to other types of summary [ 20 , 21 ] , rather than efficient generation of summaries .\nDespite the importance of efficient summary generation in Web search , few algorithms appear in the literature .\nSilber and McKoy [ 19 ] describe a linear-time lexical chaining algorithm for use in generic summaries , but offer no empirical data for the performance of their algorithm .\nWhite et al [ 21 ] report some experimental timings of their WebDocSum system , but the snippet generation algorithms themselves are not isolated , so it is difficult to infer snippet generation time comparable to the times we report in this paper .\n3 .\nSEARCH ENGINE ARCHITECTURES\nA search engine must perform a variety of activities , and is comprised of many sub-systems , as depicted by the boxes in Figure 1 .\nNote that there may be several other sub-systems such as the `` Advertising Engine '' or the `` Parsing Engine '' that could easily be added to the diagram , but we have concentrated on the sub-systems that are relevant to snippet generation .\nDepending on the number of documents that the search engine indexes , the data and processes for each\nFigure 1 : An abstraction of some of the sub-systems\nin a search engine .\nDepending on the number of documents indexed , each sub-system could reside on a single machine , be distributed across thousands of machines , or a combination of both .\nsub-system could be distributed over many machines , or all occupy a single server and filesystem , competing with each other for resources .\nSimilarly , it may be more efficient to combine some sub-systems in an implementation of the diagram .\nFor example , the meta-data such as document title and URL requires minimal computation apart from highlighting query words , but we note that disk seeking is likely to be minimized if title , URL and fixed summary information is stored contiguously with the text from which query biased summaries are extracted .\nHere we ignore the fixed text and consider only the generation of query biased summaries : we concentrate on the `` Snippet Engine '' .\nIn addition to data and programs operating on that data , each sub-system also has its own memory management scheme .\nThe memory management system may simply be the memory hierarchy provided by the operating system used on machines in the sub-system , or it may be explicitly coded to optimise the processes in the sub-system .\nThere are many papers on caching in search engines ( see [ 3 ] and references therein for a current summary ) , but it seems reasonable that there is a query cache in the Query Engine that stores precomputed final result pages for very popular queries .\nWhen one of the popular queries is issued , the result page is 
fetched straight from the query cache .\nIf the issued query is not in the query cache , then the Query Engine uses the four sub-systems in turn to assemble a results page .\n1 .\nThe Lexicon Engine maps query terms to integers .\n2 .\nThe Ranking Engine retrieves inverted lists for each term , using them to get a ranked list of documents .\n3 .\nThe Snippet Engine uses those document numbers and query term numbers to generate snippets .\n4 .\nThe Meta Data Engine fetches other information about each document to construct the results page .\nSIGIR 2007 Proceedings Session 6 : Summaries\nIN A document broken into one sentence per line , and a sequence of query terms .\n1 For each line of the text , L = [ w1 , w2 , ... , wm ] 2 Let h be 1 if L is a heading , 0 otherwise .\n3 Let B be 2 if L is the first line of a document , 1 if it is the second line , 0 otherwise .\n4 Let c be the number of wi that are query terms , counting repetitions .\n5 Let d be the number of distinct query terms that match some wi .\n6 Identify the longest contiguous run of query terms in L , say wj ... wj + k.\n7 Use a weighted combination of c , d , k , h and B to derive a score s. 8 Insert L into a max-heap using s as the key .\nOUT Remove the number of sentences required from the heap to form the summary .\nFigure 2 : Simple sentence ranker that operates on raw text with one sentence per line .\n4 .\nTHE SNIPPET ENGINE\nFor each document identifier passed to the Snippet Engine , the engine must generate text , preferably containing query terms , that attempts to summarize that document .\nPrevious work on summarization identifies the sentence as the minimal unit for extraction and presentation to the user [ 12 ] .\nAccordingly , we also assume a web snippet extraction process will extract sentences from documents .\nIn order to construct a snippet , all sentences in a document should be ranked against the query , and the top two or three returned as the snippet .\nThe scoring of sentences against queries has been explored in several papers [ 7 , 12 , 18 , 20 , 21 ] , with different features of sentences deemed important .\nBased on these observations , Figure 2 , shows the general algorithm for scoring sentences in relevant documents , with the highest scoring sentences making the snippet for each document .\nThe final score of a sentence , assigned in Step 7 , can be derived in many different ways .\nIn order to avoid bias towards any particular scoring mechanism , we compare sentence quality later in the paper using the individual components of the score , rather than an arbitrary combination of the components .\n4.1 Parsing Web Documents\nUnlike well edited text collections that are often the target for summarization systems , Web data is often poorly structured , poorly punctuated , and contains a lot of data that do not form part of valid sentences that would be candidates for parts of snippets .\nWe assume that the documents passed to the Snippet Engine by the Indexing Engine have all HTML tags and JavaScript removed , and that each document is reduced to a series of word tokens separated by non-word tokens .\nWe define a word token as a sequence of alphanumeric characters , while a non-word is a sequence of non-alphanumeric characters such as whitespace and the other punctuation symbols .\nBoth are limited to a maximum of 50 characters .\nAdjacent , repeating characters are removed from the punctuation .\nIncluded in the punctuation set is a special end of sentence marker which replaces the usual three 
sentence terminators `` ?!\n. ''\n.\nOften these explicit punctuation characters are missing , and so HTML tags such as
<p> and <br>
are assumed to terminate sentences .\nIn addition , a sentence must contain at least five words and no more than twenty words , with longer or shorter sentences being broken and joined as required to meet these criteria [ 10 ] .\nUnterminated HTML tags -- that is , tags with an open brace , but no close brace -- cause all text from the open brace to the next open brace to be discarded .\nA major problem in summarizing web pages is the presence of large amounts of promotional and navigational material ( `` navbars '' ) visually above and to the left of the actual page content .\nFor example , `` The most wonderful company on earth .\nProducts .\nService .\nAbout us .\nContact us .\nTry before you buy . ''\nSimilar , but often not identical , navigational material is typically presented on every page within a site .\nThis material tends to lower the quality of summaries and slow down summary generation .\nIn our experiments we did not use any particular heuristics for removing navigational information as the test collection in use ( wt100g ) pre-dates the widespread take up of the current style of web publishing .\nIn wt100g , the average web page size is more than half the current Web average [ 11 ] .\nAnecdotally , the increase is due to inclusion of sophisticated navigational and interface elements and the JavaScript functions to support them .\nHaving defined the format of documents that are presented to the Snippet Engine , we now define our Compressed Token System ( CTS ) document storage scheme , and the baseline system used for comparison .\n4.2 Baseline Snippet Engine\nAn obvious document representation scheme is to simply compress each document with a well known adaptive compressor , and then decompress the document as required [ 1 ] , using a string matching algorithm to effect the algorithm in Figure 2 .\nAccordingly , we implemented such a system , using zlib [ 4 ] with default parameters to compress every document after it has been parsed as in Section 4.1 .\nEach document is stored in a single file .\nWhile manageable for our small test collections or small enterprises with millions of documents , a full Web search engine may require multiple documents to inhabit single files , or a special purpose filesystem [ 6 ] .\nFor snippet generation , the required documents are decompressed one at a time , and a linear search for provided query terms is employed .\nThe search is optimized for our specific task , which is restricted to matching whole words and the sentence terminating token , rather than general pattern matching .\n4.3 The CTS Snippet Engine\nSeveral optimizations over the baseline are possible .\nThe first is to employ a semi-static compression method over the entire document collection , which will allow faster decompression with minimal compression loss [ 24 ] .\nUsing a semistatic approach involves mapping words and non-words produced by the parser to single integer tokens , with frequent symbols receiving small integers , and then choosing a coding scheme that assigns small numbers a small number of bits .\nWords and non-words strictly alternate in the compressed file , which always begins with a word .\nIn this instance we simply assign each symbol its ordinal number in a list of symbols sorted by frequency .\nWe use the\nSIGIR 2007 Proceedings Session 6 : Summaries\nvbyte coding scheme to code the word tokens [ 22 ] .\nThe set of non-words is limited to the 64 most common punctuation sequences in the collection itself , and are encoded with a flat 6-bit binary 
code .\nThe remaining 2 bits of each punctuation symbol are used to store capitalization information .\nThe process of computing the semi-static model is complicated by the fact that the number of words and non-words appearing in large web collections is high .\nIf we stored all words and non-words appearing in the collection , and their associated frequency , many gigabytes of RAM or a B-tree or similar on-disk structure would be required [ 23 ] .\nMoffat et al. [ 14 ] have examined schemes for pruning models during compression using large alphabets , and conclude that rarely occurring terms need not reside in the model .\nRather , rare terms are spelt out in the final compressed file , using a special word token ( ESCAPE symbol ) , to signal their occurrence .\nDuring the first pass of encoding , two move-to-front queues are kept ; one for words and one for non-words .\nWhenever the available memory is consumed and a new symbol is discovered in the collection , an existing symbol is discarded from the end of the queue .\nIn our implementation , we enforce the stricter condition on eviction that , where possible , the evicted symbol should have a frequency of one .\nIf there is no symbol with frequency one in the last half of the queue , then we evict symbols of frequency two , and so on until enough space is available in the model for the new symbol .\nThe second pass of encoding replaces each word with its vbyte encoded number , or the ESCAPE symbol and an ASCII representation of the word if it is not in the model .\nSimilarly each non-word sequence is replaced with its codeword , or the codeword for a single space character if it is not in the model .\nWe note that this lossless compression of non-words is acceptable when the documents are used for snippet generation , but may not be acceptable for a document database .\nWe assume that a separate sub-system would hold cached documents for other purposes where exact punctuation is important .\nWhile this semi-static scheme should allow faster decompression than the baseline , it also readily allows direct matching of query terms as compressed integers in the compressed file .\nThat is , sentences can be scored without having to decompress a document , and only the sentences returned as part of a snippet need to be decoded .\nThe CTS system stores all documents contiguously in one file , and an auxiliary table of 64 bit integers indicating the start offset of each document in the file .\nFurther , it must have access to the reverse mapping of term numbers , allowing those words not spelt out in the document to be recovered and returned to the Query Engine as strings .\nThe first of these data structures can be readily partitioned and distributed if the Snippet Engine occupies multiple machines ; the second , however , is not so easily partitioned , as any document on a remote machine might require access to the whole integer-to-string mapping .\nThis is the second reason for employing the model pruning step during construction of the semi-static code : it limits the size of the reverse mapping table that should be present on every machine implementing the Snippet Engine .\n4.4 Experimental assessment of CTS\nAll experiments reported in this paper were run on a Sun Fire V210 Server running Solaris 10 .\nThe machine consists of dual 1.34 GHz UltraSPARC IIIi processors and 4GB of\nTable 1 : Total storage space ( Mb ) for documents for the three test collections both compressed , and uncompressed .\nQueries grouped in 100 's\nFigure 3 : Time 
to generate snippets for 10 documents per query , averaged over buckets of 100 queries , for the first 7000 Excite queries on WT10G .\nRAM .\nAll source code was compiled using gcc4 .1.1 with - O9 optimisation .\nTimings were run on an otherwise unoccupied machine and were averaged over 10 runs , with memory flushed between runs to eliminate any caching of data files .\nIn the absence of evidence to the contrary , we assume that it is important to model realistic query arrival sequences and the distribution of query repetitions for our experiments .\nConsequently , test collections which lack real query logs , such as TREC Ad Hoc and .\nGOV2 were not considered suitable .\nObtaining extensive query logs and associated result doc-ids for a corresponding large collection is not easy .\nWe have used two collections ( WT10G and WT100G ) from the TREC Web Track [ 8 ] coupled with queries from Excite logs from the same ( c. 1997 ) period .\nFurther , we also made use of a medium sized collection WT50G , obtained by randomly sampling half of the documents from WT100G .\nThe first two rows of Table 1 give the number of documents and the size in Mb of these collections .\nThe final two rows of Table 1 show the size of the resulting document sets after compression with the baseline and CTS schemes .\nAs expected , CTS admits a small compression loss over zlib , but both substantially reduce the size of the text to about 20 % of the original , uncompressed size .\nNote that the figures for CTS do not include the reverse mapping from integer token to string that is required to produce the final snippets as that occupies RAM .\nIt is 1024 Mb in these experiments .\nThe Zettair search engine [ 25 ] was used to produce a list of documents to summarize for each query .\nFor the majority of the experiments the Okapi BM25 scoring scheme was used to determine document rankings .\nFor the static caching experiments reported in Section 5 , the score of each document\nTable 2 : Average time ( msec ) for the final 7000\nqueries in the Excite logs using the baseline and CTS systems on the 3 test collections .\nis a 50:50 weighted average of the BM25 score ( normalized by the top scoring document for each query ) and a score for each document independent of any query .\nThis is to simulate effects of ranking algorithms like PageRank [ 1 ] on the distribution of document requests to the Snippet Engine .\nIn our case we used the normalized Access Count [ 5 ] computed from the top 20 documents returned to the first 1 million queries from the Excite log to determine the query independent score component .\nPoints on Figure 3 indicate the mean running time to generate 10 snippets for each query , averaged in groups of 100 queries , for the first 7000 queries in the Excite query log .\nOnly the data for WT10G is shown , but the other collections showed similar patterns .\nThe x-axis indicates the group of 100 queries ; for example , 20 indicates the queries 2001 to 2100 .\nClearly there is a caching effect , with times dropping substantially after the first 1000 or so queries are processed .\nAll of this is due to the operating system caching disk blocks and perhaps pre-fetching data ahead of specific read requests .\nThis is evident because the baseline system has no large internal data structures to take advantage of non-disk based caching , it simply opens and processes files , and the speed up is evident for the baseline system .\nPart of this gain is due to the spatial locality of disk references generated by the 
query stream : repeated queries will already have their document files cached in memory ; and similarly different queries that return the same documents will benefit from document caching .\nBut when the log is processed after removing all but the first request for each document , the pronounced speed-up is still evident as more queries are processed ( not shown in figure ) .\nThis suggests that the operating system ( or the disk itself ) is reading and buffering a larger amount of data than the amount requested and that this brings benefit often enough to make an appreciable difference in snippet generation times .\nThis is confirmed by the curve labeled `` CTS without caching '' , which was generated after mounting the filesystem with a `` no-caching '' option ( directio in Solaris ) .\nWith disk caching turned off , the average time to generate snippets varies little .\nThe time to generate ten snippets for a query , averaged over the final 7000 queries in the Excite log as caching effects have dissipated , are shown in Table 2 .\nOnce the system has stabilized , CTS is over 50 % faster than the Baseline system .\nThis is primarily due to CTS matching single integers for most query words , rather than comparing strings in the baseline system .\nTable 3 shows a break down of the average time to generate ten snippets over the final 7000 queries of the Excite log on the WT50G collection when entire documents are processed , and when only the first half of each document is processed .\nAs can be seen , the majority of time spent generating a snippet is in locating the document on disk ( `` Seek '' ) : 64 % for whole documents , and 75 % for half documents .\nEven if the amount of processing a document must\nTable 3 : Time to generate 10 snippets for a single\nquery ( msec ) for the WT50G collection averaged over the final 7000 Excite queries when either all of each document is processed ( 100 % ) or just the first half of each document ( 50 % ) .\nundergo is halved , as in the second row of the Table , there is only a 14 % reduction in the total time required to generate a snippet .\nAs locating documents in secondary storage occupies such a large proportion of snippet generation time , it seems logical to try and reduce its impact through caching .\n5 .\nDOCUMENT CACHING\nIn Section 3 we observed that the Snippet Engine would have its own RAM in proportion to the size of the document collection .\nFor example , on a whole-of-Web search engine , the Snippet Engine would be distributed over many workstations , each with at least 4 Gb of RAM .\nIn a small enterprise , the Snippet Engine may be sharing RAM with all other sub-systems on a single workstation , hence only have 100 Mb available .\nIn this section we use simulation to measure the number of cache hits in the Snippet Engine as memory size varies .\nWe compare two caching policies : a static cache , where the cache is loaded with as many documents as it can hold before the system begins answering queries , and then never changes ; and a least-recently-used cache , which starts out as for the static cache , but whenever a document is accessed it moves to the front of a queue , and if a document is fetched from disk , the last item in the queue is evicted .\nNote that documents are first loaded into the caches in order of decreasing query independent score , which is computed as described in Section 4.4 .\nThe simulations also assume a query cache exists for the top Q most frequent queries , and that these queries are never processed by 
the Snippet Engine .\nAll queries passed into the simulations are from the second half of the Excite query log ( the first half being used to compute query independent scores ) , and are stemmed , stopped , and have their terms sorted alphabetically .\nThis final alteration simply allows queries such as `` red dog '' and `` dog red '' to return the same documents , as would be the case in a search engine where explicit phrase operators would be required in the query to enforce term order and proximity .\nFigure 4 shows the percentage of document access that hit cache using the two caching schemes , with Q either 0 or 10,000 , on 535,276 Excite queries on WT100G .\nThe xaxis shows the percentage of documents that are held in the cache , so 1.0 % corresponds to about 185,000 documents .\nFrom this figure it is clear that caching even a small percentage of the documents has a large impact on reducing seek time for snippet generation .\nWith 1 % of documents cached , about 222 Mb for the WT100G collection , around 80 % of disk seeks are avoided .\nThe static cache performs surprisingly well ( squares in Figure 4 ) , but is outperformed by the LRU cache ( circles ) .\nIn an actual implementation of LRU , however , there may be fragmentation of the cache as documents are swapped in and out .\nThe reason for the large impact of the document cache is\nCache size ( % of collection ) Figure 4 : Percentage of the time that the Snippet Engine does not have to go to disk in order to generate a snippet plotted against the size of the document cache as a percentage of all documents in the collection .\nResults are from a simulation on WT100G with 535,276 Excite queries .\nthat , for a particular collection , some documents are much more likely to appear in results lists than others .\nThis effect occurs partly because of the approximately Zipfian query frequency distribution , and partly because most Web search engines employ ranking methods which combine query based scores with static ( a priori ) scores determined from factors such as link graph measures , URL features , spam scores and so on [ 17 ] .\nDocuments with high static scores are much more likely to be retrieved than others .\nIn addition to the document cache , the RAM of the Snippet Engine must also hold the CTS decoding table that maps integers to strings , which is capped by a parameter at compression time ( 1 Gb in our experiments here ) .\nThis is more than compensated for by the reduced size of each document , allowing more documents into the document cache .\nFor example , using CTS reduces the average document size from 5.7 Kb to 1.2 Kb ( as shown in Table 1 ) , so a 2 Gb RAM could hold 368,442 uncompressed documents ( 2 % of the collection ) , or 850,691 documents plus a 1 Gb decompression table ( 5 % of the collection ) .\nIn fact , further experimentation with the model size reveals that the model can in fact be very small and still CTS gives good compression and fast scoring times .\nThis is evidenced in Figure 5 , where the compressed size of WT50G is shown in the solid symbols .\nNote that when no compression is used ( Model Size is 0Mb ) , the collection is only 31 Gb as HTML markup , JavaScript , and repeated punctuation has been discarded as described in Section 4.1 .\nWith a 5 Mb model , the collection size drops by more than half to 14 Gb , and increasing the model size to 750 Mb only elicits a 2 Gb drop in the collection size .\nFigure 5 also shows the average time to score and decode a a snippet ( excluding seek time ) 
with the different model sizes ( open symbols ) .\nAgain , there is a large speed up when a 5 Mb model is used , but little\nFigure 5 : Collection size of the WT50G collection\nwhen compressed with CTS using different memory limits on the model , and the average time to generate single snippet excluding seek time on 20000 Excite queries using those models .\nimprovement with larger models .\nSimilar results hold for the WT100G collection , where a model of about 10 Mb offers substantial space and time savings over no model at all , but returns diminish as the model size increases .\nApart from compression , there is another approach to reducing the size of each document in the cache : do not store the full document in cache .\nRather store sentences that are likely to be used in snippets in the cache , and if during snippet generation on a cached document the sentence scores do not reach a certain threshold , then retrieve the whole document from disk .\nThis raises questions on how to choose sentences from documents to put in cache , and which to leave on disk , which we address in the next section .\n6 .\nSENTENCE REORDERING\nSentences within each document can be re-ordered so that sentences that are very likely to appear in snippets are at the front of the document , hence processed first at query time , while less likely sentences are relegated to the rear of the document .\nThen , during query time , if k sentences with a score exceeding some threshold are found before the entire document is processed , the remainder of the document is ignored .\nFurther , to improve caching , only the head of each document can be stored in the cache , with the tail residing on disk .\nNote that we assume that the search engine is to provide `` cached copies '' of a document -- that is , the exact text of the document as it was indexed -- then this would be serviced by another sub-system in Figure 1 , and not from the altered copy we store in the Snippet Engine .\nWe now introduce four sentence reordering approaches .\n1 .\nNatural order The first few sentences of a well authored document usually best describe the document content [ 12 ] .\nThus simply processing a document in order should yield a quality snippet .\nUnfortunately , however , web documents are often not well authored , with little editorial or professional\nwriting skills brought to bear on the creation of a work of literary merit .\nMore importantly , perhaps , is that we are producing query-biased snippets , and there is no guarantee that query terms will appear in sentences towards the front of a document .\n2 .\nSignificant terms ( ST ) Luhn introduced the concept of a significant sentence as containing a cluster of significant terms [ 12 ] , a concept found to work well by Tombros and Sanderson [ 20 ] .\nLet fd , t be the frequency of term t in document d , then term t is determined to be significant if\nwhere sd is the number of sentences in document d .\nA bracketed section is defined as a group of terms where the leftmost and rightmost terms are significant terms , and no significant terms in the bracketed section are divided by more than four non-significant terms .\nThe score of a bracketed section is the square of the number of significant words falling in the section , divided by the total number of words in the entire sentence .\nThe a priori score for a sentence is computed as the maximum of all scores for the bracketed sections of the sentence .\nWe then sort the sentences by this score .\n3 .\nQuery log based ( QLt ) 
Many Web queries repeat , and a small number of queries make up a large volume of total searches [ 9 ] .\nIn order to take advantage of this bias , sentences that contain many past query terms should be promoted to the front of a document , while sentences that contain few query terms should be demoted .\nIn this scheme , the sentences are sorted by the number of sentence terms that occur in the query log .\nTo ensure that long sentences do not dominate over shorter qualitative sentences the score assigned to each sentence is divided by the number of terms in that sentence giving each sentence a score between 0 and 1 .\n4 .\nQuery log based ( QLu ) This scheme is as for QLt , but repeated terms in the sentence are only counted once .\nBy re-ordering sentences using schemes ST , QLt or QLu , we aim to terminate snippet generation earlier than if Natural Order is used , but still produce sentences with the same number of unique query terms ( d in Figure 2 ) , total number of query terms ( c ) , the same positional score ( h + f ) and the same maximum span ( k ) .\nAccordingly , we conducted experiments comparing the methods , the first 80 % of the Excite query log was used to reorder sentences when required , and the final 20 % for testing .\nFigure 6 shows the differences in snippet scoring components using each of the three methods over the Natural Order method .\nIt is clear that sorting sentences using the Significant Terms ( ST ) method leads to the smallest change in the sentence scoring components .\nThe greatest change over all methods is in the sentence position ( h + f ) component of the score , which is to be expected as their is no guarantee that leading and heading sentences are processed at all after sentences are re-ordered .\nThe second most affected component is the number of distinct query terms in a returned sentence , but if only the first 50 % of the document is processed with the ST method , there is a drop of only 8 % in the number of distinct query terms found in snippets .\nDepending how these various components are weighted to compute an overall snippet score , one can argue that there is little overall affect on scores when processing only half the document using the ST method .\nDocuments size used\nFigure 6 : Relative difference in the snippet score components compared to Natural Ordered documents when the amount of documents processed is reduced , and the sentences in the document are reordered using Query Logs ( QLt , QLu ) or Significant Terms ( ST ) .\n7 .\nDISCUSSION\nIn this paper we have described the algorithms and compression scheme that would make a good Snippet Engine sub-system for generating text snippets of the type shown on the results pages of well known Web search engines .\nOur experiments not only show that our scheme is over 50 % faster than the obvious baseline , but also reveal some very important aspects of the snippet generation problem .\nPrimarily , caching documents avoids seek costs to secondary memory for each document that is to be summarized , and is vital for fast snippet generation .\nOur caching simulations show that if as little as 1 % of the documents can be cached in RAM as part of the Snippet Engine , possibly distributed over many machines , then around 75 % of seeks can be avoided .\nOur second major result is that keeping only half of each document in RAM , effectively doubling the cache size , has little affect on the quality of the final snippets generated from those half-documents , provided that the sentences that are 
kept in memory are chosen using the Significant Term algorithm of Luhn [ 12 ] .\nBoth our document compression and compaction schemes dramatically reduce the time taken to generate snippets .\nNote that these results are generated using a 100Gb subset of the Web , and the Excite query log gathered from the same period as that subset was created .\nWe are assuming , as there is no evidence to the contrary , that this collection and log is representative of search engine input in other domains .\nIn particular , we can scale our results to examine what resources would be required , using our scheme , to provide a Snippet Engine for the entire World Wide Web .\nWe will assume that the Snippet Engine is distributed across M machines , and that there are N web pages in the collection to be indexed and served by the search engine .\nWe also assume a balanced load for each machine , so each machine serves about N/M documents , which is easily achieved in practice .\nEach machine , therefore , requires RAM to hold the following .\n9 The CTS model , which should be 1/1000 of the size of the uncompressed collection ( using results in Fig\nSIGIR 2007 Proceedings Session 6 : Summaries\nure 5 and Williams et al. [ 23 ] ) .\nAssuming an average uncompressed document size of 8 Kb [ 11 ] , this would require N/M \u00d7 8.192 bytes of memory .\n\u2022 A cache of 1 % of all N/M documents .\nEach document requires 2 Kb when compressed with CTS ( Table 1 ) , and only half of each document is required using ST sentence reordering , requiring a total of N/M \u00d7 0.01 \u00d7 1024 bytes .\n\u2022 The offset array that gives the start position of each document in the single , compressed file : 8 bytes per\nN/M documents .\nThe total amount of RAM required by a single machine , therefore , would be N/M ( 8.192 + 10.24 + 8 ) bytes .\nAssuming that each machine has 8 Gb of RAM , and that there are 20 billion pages to index on the Web , a total of M = 62 machines would be required for the Snippet Engine .\nOf course in practice , more machines may be required to manage the distributed system , to provide backup services for failed machines , and other networking services .\nThese machines would also need access to 37 Tb of disk to store the compressed document representations that were not in cache .\nIn this work we have deliberately avoided committing to one particular scoring method for sentences in documents .\nRather , we have reported accuracy results in terms of the four components that have been previously shown to be important in determining useful snippets [ 20 ] .\nThe CTS method can incorporate any new metrics that may arise in the future that are calculated on whole words .\nThe document compaction techniques using sentence re-ordering , however , remove the spatial relationship between sentences , and so if a scoring technique relies on the position of a sentence within a document , the aggressive compaction techniques reported here can not be used .\nA variation on the semi-static compression approach we have adopted in this work has been used successfully in previous search engine design [ 24 ] , but there are alternate compression schemes that allow direct matching in compressed text ( see Navarro and M \u00a8 akinen [ 15 ] for a recent survey . 
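The machine count and disk figure quoted above follow directly from the per-document RAM budget. The short sketch below (ours, not the authors' code; the helper name is hypothetical) redoes the arithmetic with the constants stated in the text: 8.192 bytes of CTS model, 10.24 bytes of cached half-document, and an 8-byte offset per document, with 8 Gb of RAM per machine and 20 billion pages.

```python
# Back-of-the-envelope check of the Snippet Engine sizing argument above.
# Constants come from the text; the helper name is ours.
import math

def snippet_engine_sizing(n_docs, ram_per_machine_bytes=8 * 2**30):
    model_bytes_per_doc  = 8.192          # CTS model: 1/1000 of an 8 KB document
    cache_bytes_per_doc  = 0.01 * 1024    # 1% of docs cached, 2 KB compressed, half kept
    offset_bytes_per_doc = 8              # one 64-bit offset per document
    per_doc = model_bytes_per_doc + cache_bytes_per_doc + offset_bytes_per_doc
    machines = math.ceil(n_docs * per_doc / ram_per_machine_bytes)
    disk_tb = n_docs * 2048 / 2**40       # compressed documents kept on disk
    return machines, disk_tb

machines, disk_tb = snippet_engine_sizing(20 * 10**9)
print(machines, round(disk_tb, 1))        # -> 62 machines, roughly 37 TB of disk
```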
)\nAs seek time dominates the snippet generation process , we have not focused on this portion of the snippet generation in detail in this paper .\nWe will explore alternate compression schemes in future work ."} {"id": "H-8", "title": "", "abstract": "", "keyphrases": ["inform retriev", "evalu", "relev judgement", "reusabl", "lowerest-confid comparison", "mtc", "rtc", "expect", "varianc", "relev distribut", "test collect"], "prmu": [], "lvl-1": "Robust Test Collections for Retrieval Evaluation Ben Carterette Center for Intelligent Information Retrieval Computer Science Department University of Massachusetts Amherst Amherst, MA 01003 carteret@cs.umass.edu ABSTRACT Low-cost methods for acquiring relevance judgments can be a boon to researchers who need to evaluate new retrieval tasks or topics but do not have the resources to make thousands of judgments.\nWhile these judgments are very useful for a one-time evaluation, it is not clear that they can be trusted when re-used to evaluate new systems.\nIn this work, we formally define what it means for judgments to be reusable: the confidence in an evaluation of new systems can be accurately assessed from an existing set of relevance judgments.\nWe then present a method for augmenting a set of relevance judgments with relevance estimates that require no additional assessor effort.\nUsing this method practically guarantees reusability: with as few as five judgments per topic taken from only two systems, we can reliably evaluate a larger set of ten systems.\nEven the smallest sets of judgments can be useful for evaluation of new systems.\nCategories and Subject Descriptors: H.3 Information Storage and Retrieval; H.3.4 Systems and Software: Performance Evaluation General Terms: Experimentation, Measurement, Reliability 1.\nINTRODUCTION Consider an information retrieval researcher who has invented a new retrieval task.\nShe has built a system to perform the task and wants to evaluate it.\nSince the task is new, it is unlikely that there are any extant relevance judgments.\nShe does not have the time or resources to judge every document, or even every retrieved document.\nShe can only judge the documents that seem to be the most informative and stop when she has a reasonable degree of confidence in her conclusions.\nBut what happens when she develops a new system and needs to evaluate it?\nOr another research group decides to implement a system to perform the task?\nCan they reliably reuse the original judgments?\nCan they evaluate without more relevance judgments?\nEvaluation is an important aspect of information retrieval research, but it is only a semi-solved problem: for most retrieval tasks, it is impossible to judge the relevance of every document; there are simply too many of them.\nThe solution used by NIST at TREC (Text REtrieval Conference) is the pooling method [19, 20]: all competing systems contribute N documents to a pool, and every document in that pool is judged.\nThis method creates large sets of judgments that are reusable for training or evaluating new systems that did not contribute to the pool [21].\nThis solution is not adequate for our hypothetical researcher.\nThe pooling method gives thousands of relevance judgments, but it requires many hours of (paid) annotator time.\nAs a result, there have been a slew of recent papers on reducing annotator effort in producing test collections: Cormack et al. [11], Zobel [21], Sanderson and Joho [17], Carterette et al. [8], and Aslam et al. 
[4], among others.\nAs we will see, the judgments these methods produce can significantly bias the evaluation of a new set of systems.\nReturning to our hypothetical resesarcher, can she reuse her relevance judgments?\nFirst we must formally define what it means to be reusable.\nIn previous work, reusability has been tested by simply assessing the accuracy of a set of relevance judgments at evaluating unseen systems.\nWhile we can say that it was right 75% of the time, or that it had a rank correlation of 0.8, these numbers do not have any predictive power: they do not tell us which systems are likely to be wrong or how confident we should be in any one.\nWe need a more careful definition of reusability.\nSpecifically, the question of reusability is not how accurately we can evaluate new systems.\nA malicious adversary can always produce a new ranked list that has not retrieved any of the judged documents.\nThe real question is how much confidence we have in our evaluations, and, more importantly, whether we can trust our estimates of confidence.\nEven if confidence is not high, as long as we can trust it, we can identify which systems need more judgments in order to increase confidence.\nAny set of judgments, no matter how small, becomes reusable to some degree.\nSmall, reusable test collections could have a huge impact on information retrieval research.\nResearch groups would be able to share the relevance judgments they have done in-house for pilot studies, new tasks, or new topics.\nThe amount of data available to researchers would grow exponentially over time.\n2.\nROBUST EVALUATION Above we gave an intuitive definition of reusability: a collection is reusable if we can trust our estimates of confidence in an evaluation.\nBy that we mean that if we have made some relevance judgments and have, for example, 75% confidence that system A is better than system B, we would like there to be no more than 25% chance that our assessment of the relative quality of the systems will change as we continue to judge documents.\nOur evaluation should be robust to missing judgments.\nIn our previous work, we defined confidence as the probability that the difference in an evaluation measure calculated for two systems is less than zero [8].\nThis notion of confidence is defined in the context of a particular evaluation task that we call comparative evaluation: determining the sign of the difference in an evaluation measure.\nOther evaluation tasks could be defined; estimating the magnitude of the difference or the values of the measures themselves are examples that entail different notions of confidence.\nWe therefore see confidence as a probability estimate.\nOne of the questions we must ask about a probability estimate is what it means.\nWhat does it mean to have 75% confidence that system A is better than system B?\nAs described above, we want it to mean that if we continue to judge documents, there will only be a 25% chance that our assessment will change.\nIf this is what it means, we can trust the confidence estimates.\nBut do we know it has that meaning?\nOur calculation of confidence rested on an assumption about the probability of relevance of unjudged documents, specifically that each unjudged document was equally likely to be relevant or nonrelevant.\nThis assumption is almost certainly not realistic in most IR applications.\nAs it turns out, it is this assumption that determines whether the confidence estimates can eb trusted.\nBefore elaborating on this, we formally define confidence.\n2.1 
Estimating Confidence Average precision (AP) is a standard evaluation metric that captures both the ability of a system to rank relevant documents highly (precision) as well as its ability to retrieve relevant documents (recall).\nIt is typically written as the mean precision at the ranks of relevant documents: AP = 1 |R| i\u2208R prec@r(i) where R is the set of relevant documents and r(i) is the rank of document i. Let Xi be a random variable indicating the relevance of document i.\nIf documents are ordered by rank, we can express precision as prec@i = 1/i i j=1 Xj .\nAverage precision then becomes the quadratic equation AP = 1 Xi n i=1 Xi/i i j=1 Xj = 1 Xi n i=1 j\u2265i aijXiXj where aij = 1/ max{r(i), r(j)}.\nUsing aij instead of 1/i allows us to number the documents arbitrarily.\nTo see why this is true, consider a toy example: a list of 3 documents with relevant documents B, C at ranks 1 and 3 and nonrelevant document A at rank 2.\nAverage precision will be 1 2 (1 1 x2 B+ 1 2 xBxA+ 1 3 xBxC + 1 2 x2 A+ 1 3 xAxC + 1 3 x2 C) = 1 2 1 + 2 3 because xA = 0, xB = 1, xC = 1.\nThough the ordering B, A, C is different from the labeling A, B, C, it does not affect the computation.\nWe can now see average precision itself is a random variable with a distribution over all possible assignments of relevance to all documents.\nThis random variable has an expectation, a variance, confidence intervals, and a certain probability of being less than or equal to a given value.\nAll of these are dependent on the probability that document i is relevant: pi = p(Xi = 1).\nSuppose in our previous example we do not know the relevance judgments, but we believe pA = 0.4, pB = 0.8, pC = 0.7.\nWe can then compute e.g. P(AP = 0) = 0.2 \u00b7 0.6 \u00b7 0.3 = 0.036, or P(AP = 1 2 ) = 0.2 \u00b7 0.4 \u00b7 0.7 = 0.056.\nSumming over all possibilities, we can compute expectation and variance: E[AP] \u2248 1 pi aiipi + j>i aij pipj V ar[AP] \u2248 1 ( pi)2 n i a2 iipiqi + j>i a2 ijpipj(1 \u2212 pipj) + i=j 2aiiaijpipj(1 \u2212 pi) + k>j=i 2aijaikpipjpk(1 \u2212 pi) AP asymptotically converges to a normal distribution with expectation and variance as defined above.1 For our comparative evaluation task we are interested in the sign of the difference in two average precisions: \u0394AP = AP1 \u2212 AP2.\nAs we showed in our previous work, \u0394AP has a closed form when documents are ordered arbitrarily: \u0394AP = 1 Xi n i=1 j\u2265i cij XiXj cij = aij \u2212 bij where bij is defined analogously to aij for the second ranking.\nSince AP is normal, \u0394AP is normal as well, meaning we can use the normal cumulative density function to determine the confidence that a difference in AP is less than zero.\nSince topics are independent, we can easily extend this to mean average precision (MAP).\nMAP is also normally distributed; its expectation and variance are: EMAP = 1 T t\u2208T E[APt] (1) VMAP = 1 T2 t\u2208T V ar[APt] \u0394MAP = MAP1 \u2212 MAP2 Confidence can then be estimated by calculating the expectation and variance and using the normal density function to find P(\u0394MAP < 0).\n2.2 Confidence and Robustness Having defined confidence, we turn back to the issue of trust in confidence estimates, and show how it ties into the robustness of the collection to missing judgments.\n1 These are actually approximations to the true expectation and variance, but the error is a negligible O(n2\u2212n ).\nLet Z be the set of all pairs of ranked results for a common set of topics.\nSuppose we have a set of m relevance judgments 
xm = {x1, x2, ..., xm} (using small x rather than capital X to distinguish between judged and unjudged documents); these are the judgments against which we compute confidence.\nLet Z\u03b1 be the subset of pairs in Z for which we predict that \u0394MAP = \u22121 with confidence \u03b1 given the judgments xm .\nFor the confidence estimates to be accurate, we need at least \u03b1 \u00b7 |Z\u03b1| of these pairs to actually have \u0394MAP = \u22121 after we have judged every document.\nIf they do, we can trust the confidence estimates; our evaluation will be robust to missing judgments.\nIf our confidence estimates are based on unrealistic assumptions, we cannot expect them to be accurate.\nThe assumptions they are based on are the probabilities of relevance pi.\nWe need these to be realistic.\nWe argue that the best possible distribution of relevance p(Xi) is the one that explains all of the data (all of the observations made about the retrieval systems) while at the same time making no unwarranted assumptions.\nThis is known as the principle of maximum entropy [13].\nThe entropy of a random variable X with distribution p(X) is defined as H(p) = \u2212 i p(X = i) log p(X = i).\nThis has found a wide array of uses in computer science and information retrieval.\nThe maximum entropy distribution is the one that maximizes H.\nThis distribution is unique and has an exponential form.\nThe following theorem shows the utility of a maximum entropy distribution for relevance when estimating confidence.\nTheorem 1.\nIf p(Xn |I, xm ) = argmaxpH(p), confidence estimates will be accurate.\nwhere xm is the set of relevance judgments defined above, Xn is the full set of documents that we wish to estimate the relevance of, and I is some information about the documents (unspecified as of now).\nWe forgo the proof for the time being, but it is quite simple.\nThis says that the better the estimates of relevance, the more accurate the evaluation.\nThe task of creating a reusable test collection thus becomes the task of estimating the relevance of unjudged documents.\nThe theorem and its proof say nothing whatsoever about the evaluation metric.\nThe probability estimates are entirely indepedent of the measure we are interested in.\nThis means the same probability estimates can tell us about average precision as well as precision, recall, bpref, etc..\nFurthermore, we could assume that the relevance of documents i and j is independent and achieve the same result, which we state as a corollary: Corollary 1.\nIf p(Xi|I, xm ) = argmaxpH(p), confidence estimates will be accurate.\nThe task therefore becomes the imputation of the missing values of relevance.\nThe theorem implies that the closer we get to the maximum entropy distribution of relevance, the closer we get to robustness.\n3.\nPREDICTING RELEVANCE In our statement of Theorem 1, we left the nature of the information I unspecified.\nOne of the advantages of our confidence estimates is that they admit information from a wide variety of sources; essentially anything that can be modeled can be used as information for predicting relevance.\nA natural source of information is the retrieval systems themselves: how they ranked the judged documents, how often they failed to rank relevant documents, how they perform across topics, and so on.\nIf we treat each system as an information retrieval expert providing an opinion about the relevance of each document, the problem becomes one of expert opinion aggregation.\nThis is similar to the metasearch or data fusion problem 
in which the task is to take k input systems and merge them into a single ranking.\nAslam et al. [3] previously identified a connection between evaluation and metasearch.\nOur problem has two key differences: 1.\nWe explicitly need probabilities of relevance that we can plug into Eq.\n1; metasearch algorithms have no such requirement.\n2.\nWe are accumulating relevance judgments as we proceed with the evaluation and are able to re-estimate relevance given each new judgment.\nIn light of (1) above, we introduce a probabilistic model for expert combination.\n3.1 A Model for Expert Opinion Aggregation Suppose that each expert j provides a probability of relevance qij = pj(Xi = 1).\nThe information about the relevance of document i will then be the set of k expert opinions I = qi = (qi1, qi2, \u00b7 \u00b7 \u00b7 , qik).\nThe probability distribution we wish to find is the one that maximizes the entropy of pi = p(Xi = 1|qi).\nAs it turns out, finding the maximum entropy model is equivalent to finding the parameters that maximize the likelihood [5].\nBlower [6] explicitly shows that finding the maximum entropy model for a binary variable is equivalent to solving a logistic regression.\nThen pi = p(Xi = 1|qi) = exp k j=1 \u03bbjqij 1 + exp k j=1 \u03bbj qij (2) where \u03bb1, \u00b7 \u00b7 \u00b7 , \u03bbk are the regression parameters.\nWe include a beta prior for p(\u03bbj) with parameters \u03b1, \u03b2.\nThis can be seen as a type of smoothing to account for the fact that the training data is highly biased.\nThis model has the advantage of including the statistical dependence between the experts.\nA model of the same form was shown by Clemen & Winkler to be the best for aggregating expert probabilities [10].\nA similar maximumentropy-motivated approach has been used for expert aggregation [15].\nAslam & Montague [1] used a similar model for metasearch, but assumed independence among experts.\nWhere do the qij s come from?\nUsing raw, uncalibrated scores as predictors will not work because score distributions vary too much between topics.\nA language modeling ranker, for instance, will typically give a much higher score to the top retrieved document for a short query than to the top retrieved document for a long query.\nWe could train a separate predicting model for each topic, but that does not take advantage of all of the information we have: we may only have a handful of judgments for a topic, not enough to train a model to any confidence.\nFurthermore, it seems reasonable to assume that if an expert makes good predictions for one topic, it will make good predictions for other topics as well.\nWe could use a hierarchical model [12], but that will not generalize to unseen topics.\nInstead, we will calibrate the scores of each expert individually so that scores can be compared both within topic and between topic.\nThus our model takes into account not only the dependence between experts, but also the dependence between experts'' performances on different tasks (topics).\n3.2 Calibrating Experts Each expert gives us a score and a rank for each document.\nWe need to convert these to probabilities.\nA method such as the one used by Manmatha et al. 
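As a concrete illustration of the aggregation model in Eq. (2), the following sketch (ours, not the authors' implementation; it omits the beta prior on the weights and the per-expert score calibration discussed below, and the toy numbers are hypothetical) fits the weights by simple gradient ascent on the logistic likelihood and then produces P(Xi = 1 | qi) for an unjudged document.

```python
# Minimal sketch of the expert-aggregation model in Eq. (2): a logistic
# regression from the experts' probabilities q_ij to P(X_i = 1).  It omits
# the beta prior on the weights and the per-expert calibration described in
# the text; all function names and data are ours.
import math

def predict(lam, qi):
    z = sum(l * q for l, q in zip(lam, qi))
    return 1.0 / (1.0 + math.exp(-z))          # Eq. (2)

def fit_lambda(Q, x, epochs=500, lr=0.1):
    """Q: per-document expert probability vectors for the judged documents.
    x: the 0/1 relevance judgments.  Returns the weights lambda_j."""
    k = len(Q[0])
    lam = [0.0] * k
    for _ in range(epochs):
        for qi, xi in zip(Q, x):
            p = predict(lam, qi)
            for j in range(k):                 # gradient step on the log-likelihood
                lam[j] += lr * (xi - p) * qi[j]
    return lam

# Hypothetical usage with three "experts" and four judged documents:
Q = [[0.9, 0.8, 0.7], [0.2, 0.1, 0.4], [0.8, 0.9, 0.6], [0.1, 0.3, 0.2]]
x = [1, 0, 1, 0]
lam = fit_lambda(Q, x)
print(predict(lam, [0.7, 0.6, 0.9]))           # estimated P(relevant) for a new document
```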
[14] could be used to convert scores into probabilities of relevance.\nThe pairwise preference method of Carterette & Petkova [9] could also be used, interpeting the ranking of one document over another as an expression of preference.\nLet q\u2217 ij be expert j``s self-reported probability that document i is relevant.\nIntuitively it seems clear that q\u2217 ij should decrease with rank, and it should be zero if document i is unranked (the expert did not believe it to be relevant).\nThe pairwise preference model can handle these two requirements easily, so we will use it.\nLet \u03b8rj (i) be the relevance coefficient of the document at rank rj(i).\nWe want to find the \u03b8s that maximize the likelihood function: Ljt(\u0398) = rj (i) 0 .\nIf it turns out that \u0394MAP < 0 , we win the dollar .\nOtherwise , we pay out O .\nIf our confidence estimates are perfectly accurate , we break even .\nIf confidence is greater than accuracy , we lose money ; we win if accuracy is greater than confidence .\nCounterintuitively , the most desirable outcome is breaking even : if we lose money , we can not trust the confidence estimates , but if we win money , we have either underestimated confidence or judged more documents than necessary .\nHowever , the cost of not being able to trust the confidence estimates is higher than the cost of extra relevance judgments , so we will treat positive outcomes as `` good '' .\nThe amount we win on each pairwise comparison i is :\n0 ) .\nThe summary statistic is W , the mean of Wi .\nNote that as Pi increases , we lose more for being wrong .\nThis is as it should be : the penalty should be great for missing the high probability predictions .\nHowever , since our losses grow without bound as predictions approach certainty , we cap - Wi at 100 .\nFor our hypothesis that RTC requires fewer judgments than MTC , we are interested in the number of judgments needed to reach 95 % confidence on the first pair of systems .\nThe median is more interesting than the mean : most pairs require a few hundred judgments , but a few pairs require several thousand .\nThe distribution is therefore highly skewed , and the mean strongly affected by those outliers .\nFinally , for our hypothesis that RTC is more accurate than MTC , we will look at Kendall 's \u03c4 correlation between a ranking of k systems by a small set of judgments and the true ranking using the full set of judgments .\nKendall 's \u03c4 , a nonparametric statistic based on pairwise swaps between two lists , is a standard evaluation for this type of study .\nIt ranges from -1 ( perfectly anti-correlated ) to 1 ( rankings identical ) , with 0 meaning that half of the pairs are swapped .\nAs we touched on in the introduction , though , an accuracy measure like rank correlation is not a good evaluation of reusability .\nWe include it for completeness .\n4.4.1 Hypothesis Testing\nRunning multiple trials allows the use of statistical hypothesis testing to compare algorithms .\nUsing the same sets of systems allows the use of paired tests .\nAs we stated above , we are more interested in the median number of judgments than the mean .\nA test for difference in median is the Wilcoxon sign rank test .\nWe can also use a paired t-test to test for a difference in mean .\nFor rank correlation , we can use a paired t-test to test for a difference in \u03c4 .\n5 .\nRESULTS AND ANALYSIS\nThe comparison between MTC and RTC is shown in Table 2 .\nWith MTC and uniform probabilities of relevance , the results are far from robust .\nWe 
can not reuse the relevance judgments with much confidence .\nBut with RTC , the results are very robust .\nThere is a slight dip in accuracy when confidence gets above 0.95 ; nonetheless , the confidence predictions are trustworthy .\nMean Wi shows that RTC is much closer to 0 than MTC .\nThe distribution of confidence scores shows that at least 80 % confidence is achieved more than 35 % of the time , indicating that neither algorithm is being too conservative in its confidence estimates .\nThe confidence estimates are rather low overall ; that is because we have built a test collection from only two initial systems .\nRecall from Section 1 that we can not require ( or even expect ) a minimum level of confidence when we generalize to new systems .\nMore detailed results for both algorithms are shown in Figure 2 .\nThe solid line is the ideal result that would give W = 0 .\nRTC is on or above this line at all points until confidence reaches about 0.97 .\nAfter that there is a slight dip in accuracy which we discuss below .\nNote that both\nTable 2 : Confidence that P ( \u0394MAP < 0 ) and accuracy of prediction when generalizing a set of relevance judgments acquired using MTC and RTC .\nEach bin contains over 1,000 trials from the adhoc 3 , 5 -- 8 sets .\nRTC is much more robust than MTC .\nW is defined in Section 4.4 ; closer to 0 is better .\nMedian judged is the number of judgments to reach 95 % confidence on the first two systems .\nMean \u03c4 is the average rank correlation for all 10 systems .\nFigure 2 : Confidence vs. accuracy of MTC and RTC .\nThe solid line is the perfect result that would\ngive W = 0 ; performance should be on or above this line .\nEach point represents at least 500 pairwise comparisons .\nalgorithms are well above the line up to around confidence 0.7 .\nThis is because the baseline performance on these data sets is high ; it is quite easy to achieve 75 % accuracy doing very little work [ 7 ] .\nNumber of Judgments : The median number of judgments required by MTC to reach 95 % confidence on the first two systems is 251 , an average of 5 per topic .\nThe median required by RTC is 235 , about 4.7 per topic .\nAlthough the numbers are close , RTC 's median is significantly lower by a paired Wilcoxon test ( p < 0.0001 ) .\nFor comparison , a pool of depth 100 would result in a minimum of 5,000 judgments for each pair .\nThe difference in means is much greater .\nMTC required a mean of 823 judgments , 16 per topic , while RTC required a mean of 502 , 10 per topic .\n( Recall that means are strongly skewed by a few pairs that take thousands of judgments . 
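To make the quantity reported in these tables concrete, the sketch below (ours, not the authors' code) shows how a confidence value such as P(ΔMAP < 0) can be computed once the per-topic expectation and variance of ΔAP are available from the closed form of Section 2.1: the per-topic moments are combined as in Eq. (1), ΔMAP is treated as normally distributed, and the normal CDF is evaluated at zero. The inputs are assumed to be precomputed; the numbers are hypothetical.

```python
# Sketch of the confidence computation described in Section 2.1: given the
# per-topic expectation and variance of Delta-AP (from the closed form with
# the c_ij coefficients), Delta-MAP is treated as normal and the confidence
# is P(Delta-MAP < 0).  Names are ours; inputs are assumed precomputed.
import math

def confidence_dmap_less_than_zero(dap_expectations, dap_variances):
    t = len(dap_expectations)
    e_dmap = sum(dap_expectations) / t                 # E[Delta-MAP], as in Eq. (1)
    v_dmap = sum(dap_variances) / (t * t)              # Var[Delta-MAP]
    z = (0.0 - e_dmap) / math.sqrt(v_dmap)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF evaluated at 0

# Hypothetical example: 5 topics where system 1 looks slightly better.
print(confidence_dmap_less_than_zero([-0.02, -0.01, -0.03, 0.01, -0.02],
                                     [0.002, 0.001, 0.004, 0.003, 0.002]))
```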
)\nThis difference is significant by a paired t-test ( p < 0.0001 ) .\nTen percent of the sets resulted in 100 or fewer judgments ( less than two per topic ) .\nPerformance on these is very high : W = 0.41 , and 99.7 % accuracy when confidence is at least 0.9 .\nThis shows that even tiny collections can be reusable .\nFor the 50 % of sets with more than 235 judgments , accuracy is 93 % when confidence is at least 0.9 .\nRank Correlation : MTC and RTC both rank the 10 systems by EMAP ( Eq .\n( 1 ) ) calculated using their respective probability estimates .\nThe mean \u03c4 rank correlation between true MAP and EMAP is 0.393 for MTC and 0.555 for RTC .\nThis difference is significant by a paired t-test ( p < 0.0001 ) .\nNote that we do not expect the \u03c4 correlations to be high , since we are ranking the systems with so few relevance judgments .\nIt is more important that we estimate confidence in each pairwise comparison correctly .\nWe ran IP for the same number of judgments that MTC took for each pair , then ranked the systems by MAP using only those judgments ( all unjudged documents assumed nonrelevant ) .\nWe calculated the \u03c4 correlation to the true ranking .\nThe mean \u03c4 correlation is 0.398 , which is not significantly different from MTC , but is significantly lower than RTC .\nUsing uniform estimates of probability is indistinguishable from the baseline , whereas estimating relevance by expert aggregation boosts performance a great deal : nearly 40 % over both MTC and IP .\nOverfitting : It is possible to `` overfit '' : if too many judgments come from the first two systems , the variance in \u0394MAP is reduced and the confidence estimates become unreliable .\nWe saw this in Table 2 and Figure 2 where RTC exhibits a dip in accuracy when confidence is around 97 % .\nIn fact , the number of judgments made prior to a wrong prediction is over 50 % greater than the number made prior to a correct prediction .\nOverfitting is difficult to quantify exactly , because making more relevance judgments does not always cause it : at higher confidence levels , more relevance judgments are made , and as Table 2 shows , accuracy is greater at those higher confidences .\nObviously having more relevance judgments should increase both confidence and accuracy ; the difference seems to be when one system has a great deal more judgments than the other .\nPairwise Comparisons : Our pairwise comparisons fall into one of three groups :\n1 .\nthe two original runs from which relevance judgments are acquired ; 2 .\none of the original runs vs. one of the new runs ; 3 .\ntwo new runs .\nTable 3 shows confidence vs. 
accuracy results for each of these three groups .\nInterestingly , performance is worst when comparing one of the original runs to one of the additional runs .\nThis is most likely due to a large difference in the number of judgments affecting the variance of \u0394MAP .\nNevertheless , performance is quite good on all three subsets .\nWorst Case : The case intuitively most likely to produce an error is when the two systems being compared have retrieved very few documents in common .\nIf we want the judgments to be reusable , we should to be able to generalize even to runs that are very different from the ones used to acquire the relevance judgments .\nA simple measure of similarity of two runs is the average percentage of documents they retrieved in common for each topic [ 2 ] .\nWe calculated this for all pairs , then looked at performance on pairs with low similarity .\nResults are shown in\nTable 3 : Confidence vs. accuracy of RTC when comparing the two original runs , one original run and one new run , and two new runs .\nRTC is robust in all three cases .\nTable 4 : Confidence vs. accuracy of RTC when a\npair of systems retrieved 0 -- 30 % documents in common ( broken out into 0 % -- 10 % , 10 % -- 20 % , and 20 % -- 30 % ) .\nRTC is robust in all three cases .\nTable 4 .\nPerformance is in fact very robust even when similarity is low .\nWhen the two runs share very few documents in common , W is actually positive .\nMTC and IP both performed quite poorly in these cases .\nWhen the similarity was between 0 and 10 % , both MTC and IP correctly predicted \u0394MAP only 60 % of the time , compared to an 87.6 % success rate for RTC .\nBy Data Set : All the previous results have only been on the ad hoc collections .\nWe did the same experiments on our additional data sets , and broke out the results by data set to see how performance varies .\nThe results in Table 5 show everything about each set , including binned accuracy , W , mean \u03c4 , and median number of judgments to reach 95 % confidence on the first two systems .\nThe results are highly consistent from collection to collection , suggesting that our method is not overfitting to any particular data set .\n6 .\nCONCLUSIONS AND FUTURE WORK\nIn this work we have offered the first formal definition of the common idea of `` reusability '' of a test collection and presented a model that is able to achieve reusability with very small sets of relevance judgments .\nTable 2 and Figure 2 together show how biased a small set of judgments can be : MTC is dramatically overestimating confidence and is much less accurate than RTC , which is able to remove the bias to give a robust evaluation .\nThe confidence estimates of RTC , in addition to being accurate , provide a guide for obtaining additional judgments : focus on judging documents from the lowest-confidence comparisons .\nIn the long run , we see small sets of relevance judg\nTable 5 : Accuracy , W , mean \u03c4 , and median number of judgments for all 8 testing sets .\nThe results are highly consistent across data sets .\nments being shared by researchers , each group contributing a few more judgments to gain more confidence about their particular systems .\nAs time goes on , the number of judgments grows until there is 100 % confidence in every evaluation -- and there is a full test collection for the task .\nWe see further use for this method in scenarios such as web retrieval in which the corpus is frequently changing .\nIt could be applied to evaluation on a dynamic test collection 
as defined by Soboroff [ 18 ] .\nThe model we presented in Section 3 is by no means the only possibility for creating a robust test collection .\nA simpler expert aggregation model might perform as well or better ( though all our efforts to simplify failed ) .\nIn addition to expert aggregation , we could estimate probabilities by looking at similarities between documents .\nThis is an obvious area for future exploration .\nAdditionally , it will be worthwhile to investigate the issue of overfitting : the circumstances it occurs under and what can be done to prevent it .\nIn the meantime , capping confidence estimates at 95 % is a `` hack '' that solves the problem .\nWe have many more experimental results that we unfortunately did not have space for but that reinforce the notion that RTC is highly robust : with just a few judgments per topic , we can accurately assess the confidence in any pairwise comparison of systems ."} {"id": "C-27", "title": "", "abstract": "", "keyphrases": ["wireless sensor network", "local", "rang-base local", "rang-free scheme", "transmiss", "perform", "accuraci", "local error", "sensor network", "spotlight system", "local techniqu", "distribut", "event distribut", "laser"], "prmu": [], "lvl-1": "A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks Radu Stoleru, Tian He, John A. Stankovic, David Luebke Department of Computer Science University of Virginia, Charlottesville, VA 22903 {stoleru, tianhe, stankovic, luebke}@cs.\nvirginia.edu ABSTRACT The problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real world environments.\nIn this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight.\nOur system uses the spatio-temporal properties of well controlled events in the network (e.g., light), to obtain the locations of sensor nodes.\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems.\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes.\nThrough performance evaluations of a real system deployed outdoors, we obtain a 20cm localization error.\nA sensor network, with any number of nodes, deployed in a 2500m2 area, can be localized in under 10 minutes, using a device that costs less than $1000.\nTo the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.\nCategories and Subject Descriptors C.2.4 [Computer-Communications Networks]: Distributed Systems; C.3 [Special-Purpose and Application-Based Systems]: Real-Time and embedded systems.\nGeneral Terms Algorithms, Measurement, Performance, Design, Experimentation 1.\nINTRODUCTION Recently, wireless sensor network systems have been used in many promising applications including military surveillance, habitat monitoring, wildlife tracking etc. 
[12] [22] [33] [36].\nWhile many middleware services, to support these applications, have been designed and implemented successfully, localization - finding the position of sensor nodes - remains one of the most difficult research challenges to be solved practically.\nSince most emerging applications based on networked sensor nodes require location awareness to assist their operations, such as annotating sensed data with location context, it is an indispensable requirement for a sensor node to be able to find its own location.\nMany approaches have been proposed in the literature [4] [6] [13] [14] [19] [20] [21] [23] [27] [28], however it is still not clear how these solutions can be practically and economically deployed.\nAn on-board GPS [23] is a typical high-end solution, which requires sophisticated hardware to achieve high resolution time synchronization with satellites.\nThe constraints on power and cost for tiny sensor nodes preclude this as a viable solution.\nOther solutions require per node devices that can perform ranging among neighboring nodes.\nThe difficulties of these approaches are twofold.\nFirst, under constraints of form factor and power supply, the effective ranges of such devices are very limited.\nFor example the effective range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [26].\nSecond, since most sensor nodes are static, i.e. the location is not expected to change, it is not cost-effective to equip these sensors with special circuitry just for a one-time localization.\nTo overcome these limitations, many range-free localization schemes have been proposed.\nMost of these schemes estimate the location of sensor nodes by exploiting the radio connectivity information among neighboring nodes.\nThese approaches eliminate the need of high-cost specialized hardware, at the cost of a less accurate localization.\nIn addition, the radio propagation characteristics vary over time and are environment dependent, thus imposing high calibration costs for the range-free localizations schemes.\nWith such limitations in mind, this paper addresses the following research challenge: How to reconcile the need for high accuracy in location estimation with the cost to achieve it.\nOur answer to this challenge is a localization system called Spotlight.\nThis system employs an asymmetric architecture, in which sensor nodes do not need any additional hardware, other than what they currently have.\nAll the sophisticated hardware and computation reside on a single Spotlight device.\nThe Spotlight device uses a steerable laser light source, illuminating the sensor nodes placed within a known terrain.\nWe demonstrate that this localization is much more accurate (i.e., tens of centimeters) than the range-based localization schemes and that it has a much longer effective range (i.e., thousands of meters) than the solutions based on ultra-sound/acoustic ranging.\nAt the same time, since only a single sophisticated device is needed to localize the whole network, the amortized cost is much smaller than the cost to add hardware components to the individual sensors.\n2.\nRELATED WORK In this section, we discuss prior work in localization in two major categories: the range-based localization schemes (which use either expensive, per node, ranging devices for high accuracy, or less accurate ranging solutions, as the Received Signal Strength Indicator (RSSI)), and the range-free schemes, which use only connectivity 
information (hop-by-hop) as an indication of proximity among the nodes.\nThe localization problem is a fundamental research problem in many domains.\nIn the field of robotics, it has been studied extensively [9] [10].\nThe reported localization errors are on the order of tens of centimeters, when using specialized ranging hardware, i.e. laser range finder or ultrasound.\nDue to the high cost and non-negligible form factor of the ranging hardware, these solutions can not be simply applied to sensor networks.\nThe RSSI has been an attractive solution for estimating the distance between the sender and the receiver.\nThe RADAR system [2] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes.\nThe location of a mobile user is estimated within a few meters.\nIn a similar approach, MoteTrack [17] distributes the reference RSSI values to the beacon nodes.\nSolutions that use RSSI and do not require beacon nodes have also been proposed [5] [14] [24] [26] [29].\nThey all share the idea of using a mobile beacon.\nThe sensor nodes that receive the beacons, apply different algorithms for inferring their location.\nIn [29], Sichitiu proposes a solution in which the nodes that receive the beacon construct, based on the RSSI value, a constraint on their position estimate.\nIn [26], Priyantha et al. propose MAL, a localization method in which a mobile node (moving strategically) assists in measuring distances between node pairs, until the constraints on distances generate a rigid graph.\nIn [24], Pathirana et al. formulate the localization problem as an on-line estimation in a nonlinear dynamic system and proposes a Robust Extended Kalman Filter for solving it.\nElnahrawy [8] provides strong evidence of inherent limitations of localization accuracy using RSSI, in indoor environments.\nA more precise ranging technique uses the time difference between a radio signal and an acoustic wave, to obtain pair wise distances between sensor nodes.\nThis approach produces smaller localization errors, at the cost of additional hardware.\nThe Cricket location-support system [25] can achieve a location granularity of tens of centimeters with short range ultrasound transceivers.\nAHLoS, proposed by Savvides et al. [27], employs Time of Arrival (ToA) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations.\nA similar ToA technique is employed in [3].\nIn [30], Simon et al. implement a distributed system (using acoustic ranging) which locates a sniper in an urban terrain.\nAcoustic ranging for localization is also used by Kwon et al. [15].\nThe reported errors in localization vary from 2.2m to 9.5m, depending on the type (centralized vs. distributed) of the Least Square Scaling algorithm used.\nFor wireless sensor networks ranging is a difficult option.\nThe hardware cost, the energy expenditure, the form factor, the small range, all are difficult compromises, and it is hard to envision cheap, unreliable and resource-constraint devices make use of range-based localization solutions.\nHowever, the high localization accuracy, achievable by these schemes is very desirable.\nTo overcome the challenges posed by the range-based localization schemes, when applied to sensor networks, a different approach has been proposed and evaluated in the past.\nThis approach is called range-free and it attempts to obtain location information from the proximity to a set of known beacon nodes.\nBulusu et al. 
propose in [4] a localization scheme, called Centroid, in which each node localizes itself to the centroid of its proximate beacon nodes.\nIn [13], He et al. propose APIT, a scheme in which each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacon nodes heard by the node.\nThe Global Coordinate System [20], developed at MIT, uses apriori knowledge of the node density in the network, to estimate the average hop distance.\nThe DV-* family of localization schemes [21], uses the hop count from known beacon nodes to the nodes in the network to infer the distance.\nThe majority of range-free localization schemes have been evaluated in simulations, or controlled environments.\nSeveral studies [11] [32] [34] have emphasized the challenges that real environments pose.\nLangendoen and Reijers present a detailed, comparative study of several localization schemes in [16].\nTo the best of our knowledge, Spotlight is the first range-free localization scheme that works very well in an outdoor environment.\nOur system requires a line of sight between a single device and the sensor nodes, and the map of the terrain where the sensor field is located.\nThe Spotlight system has a long effective range (1000``s meters) and does not require any infrastructure or additional hardware for sensor nodes.\nThe Spotlight system combines the advantages and does not suffer from the disadvantages of the two localization classes.\n3.\nSPOTLIGHT SYSTEM DESIGN The main idea of the Spotlight localization system is to generate controlled events in the field where the sensor nodes were deployed.\nAn event could be, for example, the presence of light in an area.\nUsing the time when an event is perceived by a sensor node and the spatio-temporal properties of the generated events, spatial information (i.e. location) regarding the sensor node can be inferred.\nFigure 1.\nLocalization of a sensor network using the Spotlight system We envision, and depict in Figure 1, a sensor network deployment and localization scenario as follows: wireless sensor nodes are randomly deployed from an unmanned aerial vehicle.\nAfter deployment, the sensor nodes self-organize into a network and execute a time-synchronization protocol.\nAn aerial vehicle (e.g. 
helicopter), equipped with a device, called Spotlight, flies over the network and generates light events.\nThe sensor nodes detect the events and report back to the Spotlight device, through a base station, the timestamps when the events were detected.\nThe Spotlight device computes the location of the sensor nodes.\nDuring the design of our Spotlight system, we made the following assumptions: - the sensor network to be localized is connected and a middleware, able to forward data from the sensor nodes to the Spotlight device, is present.\n- the aerial vehicle has a very good knowledge about its position and orientation (6 parameters: 3 translation and 3 rigid-body rotation) and it possesses the map of the field where the network was deployed.\n- a powerful Spotlight device is available and it is able to generate 14 spatially large events that can be detected by the sensor nodes, even in the presence of background noise (daylight).\n- a line of sight between the Spotlight device and sensor nodes exists.\nOur assumptions are simplifying assumptions, meant to reduce the complexity of the presentation, for clarity.\nWe propose solutions that do not rely on these simplifying assumptions, in Section 6.\nIn order to formally describe and generalize the Spotlight localization system, we introduce the following definitions.\n3.1 Definitions and Problem Formulation Let``s assume that the space A \u2282R3 contains all sensor nodes N, and that each node Ni is positioned at pi(x, y, z).\nTo obtain pi(x, y, z), a Spotlight localization system needs to support three main functions, namely an Event Distribution Function (EDF) E(t), an Event Detection Function D(e), and a Localization Function L(Ti).\nThey are formally defined as follows: Definition 1: An event e(t, p) is a detectable phenomenon that occurs at time t and at point p \u0454 A. Examples of events are light, heat, smoke, sound, etc..\nLet Ti={ti1, ti2, ..., tin} be a set of n timestamps of events detected by a node i. 
Let T' = {t1', t2', ..., tm'} be the set of m timestamps of events generated in the sensor field.

Definition 2: The Event Detection Function D(e) defines a binary detection algorithm. For a given event e:

D(e) = true, if event e is detected; false, if event e is not detected    (1)

Definition 3: The Event Distribution Function (EDF) E(t) defines the point distribution of events within A at time t:

E(t) = {p | p ∈ A ∧ D(e(t, p)) = true}    (2)

Definition 4: The Localization Function L(Ti) defines a localization algorithm with input Ti, a sequence of timestamps of events detected by the node i:

L(Ti) = ∩_{t ∈ Ti} E(t)    (3)

Figure 2. Spotlight system architecture

As shown in Figure 2, the Event Detection Function D(e) is supported by the sensor nodes. It is used to determine whether an external event happens or not. It can be implemented through either a simple threshold-based detection algorithm or other advanced digital signal processing techniques. The Event Distribution E(t) and Localization L(Ti) Functions are implemented by a Spotlight device. The Localization Function is an aggregation algorithm which calculates the intersection of multiple sets of points. The Event Distribution Function E(t) describes the distribution of events over time. It is the core of the Spotlight system and it is much more sophisticated than the other two functions. Due to the fact that E(t) is realized by the Spotlight device, the hardware requirements for the sensor nodes remain minimal.

With the support of these three functions, the localization process goes as follows: 1) A Spotlight device distributes events in the space A over a period of time. 2) During the event distribution, sensor nodes record the time sequence Ti = {ti1, ti2, ..., tin} at which they detect the events. 3) After the event distribution, each sensor node sends the detection time sequence back to the Spotlight device. 4) The Spotlight device estimates the location of a sensor node i, using the time sequence Ti and the known E(t) function.

The Event Distribution Function E(t) is the core technique used in the Spotlight system and we propose three designs for it. These designs have different tradeoffs and the cost comparison is presented in Section 3.5.

3.2 Point Scan Event Distribution Function
To illustrate the basic functionality of a Spotlight system, we start with a simple sensor system where a set of nodes are placed along a straight line (A = [0, l] ⊂ R). The Spotlight device generates point events (e.g. light spots) along this line with constant speed s. The set of timestamps of events detected by a node i is Ti = {ti1}. The Event Distribution Function E(t) is:

E(t) = {p | p ∈ A ∧ p = t·s}    (4)

where t ∈ [0, l/s]. The resulting localization function is:

L(Ti) = E(ti1) = {ti1·s}    (5)

where D(e(ti1, pi)) = true for node i positioned at pi. The implementation of the Event Distribution Function E(t) is straightforward. As shown in Figure 3(a), when a light source emits a beam of light with the angular speed given by dα/dt = (s/d)·cos²(α), a light spot event with constant speed s is generated along the line situated at distance d.
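To make the Point Scan mechanics concrete, the following sketch shows how a Spotlight device could turn a reported detection timestamp into a position using Equations (4) and (5), and how the beam angle would be steered so that the projected spot moves at the constant linear speed s. This is a minimal illustration under our own naming (the class, methods and numeric values are not from the paper).

```java
/** Minimal sketch of Point Scan localization (Equations 4 and 5). */
public class PointScan {
    private final double speed;   // spot speed s along the scanned line [m/s]

    public PointScan(double speed) { this.speed = speed; }

    /** Equation (5): a node that detected the event at time t1 lies at p = t1 * s. */
    public double localize(double t1) {
        return t1 * speed;
    }

    /** Beam steering: an angular rate dAlpha/dt = (s/d) * cos^2(alpha) keeps the
     *  projected spot moving at constant speed s on a line at distance d. */
    public double angularRate(double alpha, double distance) {
        double c = Math.cos(alpha);
        return (speed / distance) * c * c;
    }

    public static void main(String[] args) {
        PointScan scan = new PointScan(0.5);             // 0.5 m/s scanning speed (example value)
        System.out.println(scan.localize(12.0));         // node detected the spot after 12 s -> 6.0 m
        System.out.println(scan.angularRate(0.0, 46.0)); // required angular rate when the beam is vertical
    }
}
```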
Figure 3. The implementation of the Point Scan EDF

The Point Scan EDF can be generalized to the case where nodes are placed in a two dimensional plane R². In this case, the Spotlight system progressively scans the plane to activate the sensor nodes. This scenario is depicted in Figure 3(b).

3.3 Line Scan Event Distribution Function
Some devices, e.g. diode lasers, can generate an entire line of events simultaneously. With these devices, we can easily support the Line Scan Event Distribution Function. We assume that the sensor nodes are placed in a two dimensional plane (A = [l x l] ⊂ R²) and that the scanning speed is s. The set of timestamps of events detected by a node i is Ti = {ti1, ti2}.

Figure 4. The implementation of the Line Scan EDF

The Line Scan EDF is defined as follows:

Ex(t) = {pk | k ∈ [0, l] ∧ pk = (t·s, k)}, for t ∈ [0, l/s]
Ey(t) = {pk | k ∈ [0, l] ∧ pk = (k, t·s − l)}, for t ∈ [l/s, 2l/s]    (6)
E(t) = Ex(t) ∪ Ey(t)

We can localize a node by calculating the intersection of the two event lines, as shown in Figure 4. More formally:

L(Ti) = E(ti1) ∩ E(ti2)    (7)

where D(e(ti1, pi)) = true and D(e(ti2, pi)) = true for node i positioned at pi.

3.4 Area Cover Event Distribution Function
Other devices, such as light projectors, can generate events that cover an area. This allows the implementation of the Area Cover EDF. The idea of the Area Cover EDF is to partition the space A into multiple sections and assign a unique binary identifier, called a code, to each section. Let's suppose that the localization is done within a plane (A ⊂ R²). Each section Sk within A has a unique code k. The Area Cover EDF is then defined as follows:

BIT(k, j) = true, if the jth bit of k is 1; false, if the jth bit of k is 0    (8)
E(t) = {p | p ∈ Sk ∧ BIT(k, t) = true}

and the corresponding localization algorithm is:

L(Ti) = {p | p = COG(Sk) ∧ BIT(k, t) = true if t ∈ Ti ∧ BIT(k, t) = false if t ∈ T' − Ti}    (9)

where COG(Sk) denotes the center of gravity of Sk. We illustrate the Area Cover EDF with a simple example. As shown in Figure 5, the plane A is divided into 16 sections. Each section Sk has a unique code k. The Spotlight device distributes the events according to these codes: at time j, a section Sk is covered by an event (lit by light) if the jth bit of k is 1. A node residing anywhere in the section Sk is localized at the center of gravity of that section. For example, nodes within section 1010 detect the events at times T = {1, 3}. At t = 4, the section where each node resides can be determined.

A more accurate localization requires a finer partitioning of the plane, hence the number of bits in the code will increase. Considering the noise that is present in a real, outdoor environment, it is easy to observe that a relatively small error in detecting the correct bit pattern could result in a large localization error. Returning to the example shown in Figure 5, if a sensor node is located in the section with code 0000 and, due to noise, at time t = 3 it thinks it detected an event, it will incorrectly conclude that its code is 1000, and it will position itself two squares below its correct position. The localization accuracy can deteriorate even further if multiple errors are present in the transmission of the code. A natural solution to this problem is to use error-correcting codes, which greatly reduce the probability of an error without paying the price of a re-transmission or lengthening the transmission time too much.
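Before turning to error correction, the basic (uncoded) Area Cover encode/decode step of Equations (8) and (9) can be sketched as follows: each grid section gets a binary code, the projector lights, at step j, every section whose jth bit is 1, and a node's set of detection times is folded back into a code and mapped to the section's center. The class, the row-major code assignment and the bit ordering are our own illustrative assumptions, not details taken from the paper.

```java
import java.util.Set;

/** Minimal sketch of the Area Cover EDF: sections of a square field are
 *  identified by binary codes; a node's detection-time set forms its code. */
public class AreaCover {
    private final int gridSide;     // number of sections per side
    private final double cellSize;  // side length of one section [m]

    public AreaCover(int gridSide, double cellSize) {
        this.gridSide = gridSide;
        this.cellSize = cellSize;
    }

    /** Row-major code of the section at column cx and row cy (assumed assignment). */
    public int codeOf(int cx, int cy) { return cy * gridSide + cx; }

    /** BIT(k, j): true if the j-th bit of code k is 1 (j = 0 is the first event step). */
    public boolean bit(int code, int step) { return ((code >> step) & 1) == 1; }

    /** Decode a node's detection-time set into the center of gravity of its section. */
    public double[] localize(Set<Integer> detectionSteps) {
        int code = 0;
        for (int step : detectionSteps) code |= (1 << step);
        int cx = code % gridSide, cy = code / gridSide;
        return new double[] { (cx + 0.5) * cellSize, (cy + 0.5) * cellSize };
    }

    public static void main(String[] args) {
        AreaCover ac = new AreaCover(4, 1.0);      // 16 sections of 1 m x 1 m
        double[] p = ac.localize(Set.of(1, 3));    // node saw events at steps 1 and 3
        System.out.println(p[0] + ", " + p[1]);    // center of the decoded section
    }
}
```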
Several error correction schemes have been proposed in the past. Two of the most notable ones are the Hamming (7, 4) code and the Golay (23, 12) code. Both are perfect linear error correcting codes. The Hamming coding scheme can detect up to 2-bit errors and correct 1-bit errors. In the Hamming (7, 4) scheme, a message having 4 bits of data (e.g. dddd, where d is a data bit) is transmitted as a 7-bit word by adding 3 error control bits (e.g. dddpdpp, where p is a parity bit).

Figure 5. The steps of the Area Cover EDF. The events cover the shaded areas.

The steps of the Area Cover technique, when using the Hamming (7, 4) scheme, are shown in Figure 6. Golay codes can detect up to 6-bit errors and correct up to 3-bit errors. Similar to Hamming (7, 4), Golay constructs a 23-bit codeword from 12-bit data. Golay codes have been used in satellite and spacecraft data transmission and are most suitable in cases where short codeword lengths are desirable.

Figure 6. The steps of the Area Cover EDF with Hamming (7, 4) ECC. The events cover the shaded areas.

Let's assume a 1-bit error probability of 0.01 and a 12-bit message that needs to be transmitted. The probability of a failed transmission is thus: 0.11 if no error detection and correction is used; 0.0061 for the Hamming scheme (i.e. more than a 1-bit error); and 0.000076 for the Golay scheme (i.e. more than 3-bit errors). Golay is thus 80 times more robust than the Hamming scheme, which is 20 times more robust than the no error correction scheme.

Considering that a limited number of corrections is possible by any coding scheme, a natural question arises: can we minimize the localization error when there are errors that cannot be corrected? This can be achieved by a clever placement of codes in the grid. As shown in Figure 7, placement A, in the presence of a 1-bit error, has a smaller average localization error when compared to placement B. The objective of our code placement strategy is to reduce the total Euclidean distance between all pairs of codes with Hamming distances smaller than K, the largest number of expected 1-bit errors.

Figure 7. Different code placement strategies

Formally, a placement is represented by a function P: [0, l]^d → C, which assigns a code to every coordinate in the d-dimensional cube of size l (e.g., in the planar case, we place codes in a 2-dimensional grid). We denote by dE(i, j) the Euclidean distance and by dH(i, j) the Hamming distance between two codes i and j. In a noisy environment, dH(i, j) determines the crossover probability between the two codes. For the case of independent detections, the higher dH(i, j) is, the lower the crossover probability will be. The objective function is defined as follows:

min Σ_{dH(i,j) ≤ K} dE(i, j), where i, j ∈ [0, l]^d    (10)

Equation 10 is a non-linear and non-convex programming problem. In general, it is analytically hard to obtain the global minimum. To overcome this, we propose a Greedy Placement method to obtain suboptimal results. In this method we initialize the 2-dimensional grid with codes. Then we repeatedly swap codes within the grid, to minimize the objective function. For each swap, we greedily choose the pair of codes which reduces the objective function (Equation 10) the most. The proposed Greedy Placement method ends when no swap of codes can further minimize the objective function. For evaluation, we compared the average localization error in the presence of K-bit errors for two strategies: the proposed Greedy Placement and the Row-Major Placement (which places the codes consecutively in the array, in row-first order).
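One possible reading of the Greedy Placement heuristic is sketched below: start from some initial assignment of codes to grid cells (row-major here), then repeatedly apply the single code swap that reduces the objective of Equation 10 the most, stopping when no swap helps. The initialization, the exhaustive pairwise swap search and all names are our own assumptions for illustration; the paper does not specify these details.

```java
/** Illustrative sketch of the Greedy Placement heuristic for Equation 10. */
public class GreedyPlacement {
    /** Sum of Euclidean distances over all cell pairs whose codes differ in at most K bits. */
    static double objective(int[][] grid, int k) {
        int n = grid.length;
        double sum = 0;
        for (int a = 0; a < n * n; a++)
            for (int b = a + 1; b < n * n; b++) {
                int ca = grid[a / n][a % n], cb = grid[b / n][b % n];
                if (Integer.bitCount(ca ^ cb) <= k)
                    sum += Math.hypot(a / n - b / n, a % n - b % n);
            }
        return sum;
    }

    /** Repeatedly apply the code swap that reduces the objective the most. */
    static void place(int[][] grid, int k) {
        int n = grid.length;
        boolean improved = true;
        while (improved) {
            improved = false;
            double best = objective(grid, k);
            int bi = -1, bj = -1;
            for (int i = 0; i < n * n; i++)
                for (int j = i + 1; j < n * n; j++) {
                    swap(grid, i, j);                        // trial swap
                    double val = objective(grid, k);
                    if (val < best) { best = val; bi = i; bj = j; }
                    swap(grid, i, j);                        // undo trial swap
                }
            if (bi >= 0) { swap(grid, bi, bj); improved = true; }
        }
    }

    static void swap(int[][] g, int i, int j) {
        int n = g.length, t = g[i / n][i % n];
        g[i / n][i % n] = g[j / n][j % n];
        g[j / n][j % n] = t;
    }

    public static void main(String[] args) {
        int n = 4;                                           // 4 x 4 grid, row-major initial codes
        int[][] grid = new int[n][n];
        for (int c = 0; c < n * n; c++) grid[c / n][c % n] = c;
        place(grid, 1);                                      // K = 1 expected bit error
        System.out.println(java.util.Arrays.deepToString(grid));
    }
}
```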
Figure 8. Localization error with code placement and no ECC (localization error, in grid units, vs. grid size, for the Row-Major and Greedy Placements)

As Figure 8 shows, if no error detection/correction capability is present and 1-bit errors occur, then our Greedy Placement method can reduce the localization error by an average of 23%, when compared to the Row-Major Placement. If error detection and correction schemes are used (e.g. Hamming (12, 8)) and 3-bit errors occur (K=3), then the Greedy Placement method reduces the localization error by 12%, when compared to the Row-Major Placement, as shown in Figure 9. If K=1, then there is no benefit in using the Greedy Placement method, since a 1-bit error can be corrected by the Hamming scheme.

Figure 9. Localization error with code placement and Hamming ECC (localization error, in grid units, vs. grid size, for the Row-Major and Greedy Placements)

3.5 Event Distribution Function Analysis
Although all three aforementioned techniques are able to localize the sensor nodes, they differ in the localization time, the communication overhead and the energy consumed by the Event Distribution Function (let's call it the Event Overhead). Let's assume that all sensor nodes are located in a square with edge size D, that the Spotlight device can generate N events (e.g. Point, Line and Area Cover events) every second, and that the maximum tolerable localization error is r. Table 1 presents the execution cost comparison of the three different Spotlight techniques.

Table 1. Execution Cost Comparison
Criterion         | Point Scan | Line Scan | Area Cover
Localization Time | (D/r)²/N   | 2(D/r)/N  | (log_r D)/N
# Detections      | 1          | 2         | log_r D
# Time Stamps     | 1          | 2         | log_r D
Event Overhead    | D²         | 2D²       | (D² log_r D)/2

Table 1 indicates that the Event Overhead for the Point Scan method is the smallest - it requires a one-time coverage of the area, hence the D². However, the Point Scan takes a much longer time than the Area Cover technique, which finishes in log_r D seconds. The Line Scan method trades the Event Overhead well against the localization time. By doubling the Event Overhead, the Line Scan method takes only a 2r/D fraction of the time needed by the Point Scan method. From Table 1, it can be observed that the execution costs do not depend on the number of sensor nodes to be localized. It is important to remark on the ratio of Event Overhead per unit time, which is indicative of the power requirement for the Spotlight device. This ratio is constant for the Point Scan (r²·N) while it grows linearly with the area for the Area Cover (D²·N/2). If the deployment area is very large, the use of the Area Cover EDF is prohibitively expensive, if not impossible. For practical purposes, the Area Cover is a viable solution for small to medium size networks, while the Line Scan works well for large networks. We discuss the implications of the power requirement for the Spotlight device, and offer a hybrid solution, in Section 6.

3.6 Localization Error Analysis
The accuracy of localization with the Spotlight technique depends on many aspects. The major factors that were considered during the implementation of the system are discussed below:
- Time Synchronization: the Spotlight system exchanges time stamps between sensor nodes and the Spotlight device. It is necessary for the system to reach consensus on global time through
synchronization.\nDue to the uncertainty in hardware processing and wireless communication, we can only confine such errors within certain bounds (e.g. one jiffy).\nAn imprecise input to the Localization Function L(T) leads to an error in node localization.\n- Uncertainty in Detection: the sampling rate of the sensor nodes is finite, consequently, there will be an unpredictable delay between the time when an event is truly present and when the sensor node detects it.\nLower sampling rates will generate larger localizations errors.\n- Size of the Event: the events distributed by the Spotlight device can not be infinitely small.\nIf a node detects one event, it is hard for it to estimate the exact location of itself within the event.\n- Realization of Event Distribution Function: EDF defines locations of events at time t. Due to the limited accuracy (e.g. mechanical imprecision), a Spotlight device might generate events which locate differently from where these events are supposed to be.\nIt is important to remark that the localization error is independent of the number of sensor nodes in the network.\nThis independence, as well as the aforementioned independence of the execution cost, indicate the very good scalability properties (with the number of sensor nodes, but not with the area of deployment) that the Spotlight system possesses.\n4.\nSYSTEM IMPLEMENTATION For our performance evaluation we implemented two Spotlight systems.\nUsing these two implementations we were able to investigate the full spectrum of Event Distribution techniques, proposed in Section 3, at a reduced one time cost (less than $1,000).\nThe first implementation, called \u03bcSpotlight, had a short range (10-20 meters), however its capability of generating the entire spectrum of EDFs made it very useful.\nWe used this implementation mainly to investigate the capabilities of the Spotlight system and tune its performance.\nIt was not intended to represent the full solution, but only a scaled down version of the system.\nThe second implementation, the Spotlight system, had a much longer range (as far as 6500m), but it was limited in the types of EDFs that it can generate.\nThe goal of this implementation was to show how the Spotlight system works in a real, outdoor environment, and show correlations with the experimental results obtained from the \u03bcSpotlight system implementation.\nIn the remaining part of this section, we describe how we implemented the three components (Event Distribution, Event Detection and Localization functions) of the Spotlight architecture, and the time synchronization protocol, a key component of our system.\n4.1 \u00b5Spotlight System The first system we built, called \u03bcSpotlight, used as the Spotlight device, an Infocus LD530 projector connected to an IBM Thinkpad laptop.\nThe system is shown in Figure 10.\nThe Event Distribution Function was implemented as a Java GUI.\nDue to the stringent timing requirements and the delay caused by the buffering in the windowing system of a PC, we used the Full-Screen Exclusive Mode API provided by Java2.\nThis allowed us to bypass the windowing system and more precisely estimate the time when an event is displayed by the projector, hence a higher accuracy of timestamps of events.\nBecause of the 50Hz refresh rate of our projector, there was still an uncertainty in the time stamping of the events of 20msec.\nWe explored the possibility of using and modifying the Linux kernel to expose the vertical synch (VSYNCH) interrupt, generated by the displaying 
device after each screen refresh, out of kernel mode. The performance evaluation results showed, however, that this level of accuracy was not needed. The sensor nodes that we used were Berkeley Mica2 motes equipped with MTS310 multi-sensor boards from Crossbow. This sensor board contains a CdSe photo sensor which can detect the light from the projector.

Figure 10. μSpotlight system implementation

With this implementation of the Spotlight system, we were able to generate Point, Line and Area Scan events.

4.2 Spotlight System
The second Spotlight system we built used, as the Spotlight device, diode lasers, a computerized telescope mount (Celestron CG-5GT, shown in Figure 11), and an IBM Thinkpad laptop. The laptop was connected, through RS232 interfaces, to the telescope mount and to one XSM600CA [7] mote, acting as a base station. The diode lasers we used ranged in power from 7mW to 35mW. They emitted at 650nm, close to the point of highest sensitivity of the CdSe photosensor. The diode lasers were equipped with lenses that allowed us to control the divergence of the beam.

Figure 11. Spotlight system implementation

The telescope mount has worm gears for smooth motion and high-precision angular measurements. The two angular measures that we used were the so-called Alt (from Altitude) and Az (from Azimuth). In astronomy, the Altitude of a celestial object is its angular distance above or below the celestial horizon, and the Azimuth is the angular distance of an object eastwards of the meridian, along the horizon. The laptop computer, through a Java GUI, controls the motion of the telescope mount, orienting it such that a full Point Scan of an area is performed, similar to the one described in Figure 3(b). For each turning point i, the 3-tuple (the Alti and Azi angles and the timestamp ti) is recorded. The Spotlight system uses the timestamp received from a sensor node j to obtain the angular measures Altj and Azj for its location. For the sensor nodes, we used XSM motes, mainly because of their longer communication range. The XSM mote has the photo sensor embedded in its main board. We had to make minor adjustments to the plastic housing, in order to expose the photo sensor to the outside. The same mote code, written in nesC for TinyOS, was used for both the μSpotlight and Spotlight system implementations.

4.3 Event Detection Function D(e)
The Event Detection Function aims to detect the beginning of an event and record the time when the event was observed. We implemented a very simple detection function based on the observed maximum value. An event i is time stamped with time ti if the reading from the photo sensor, dti, fulfills the condition:

dti > dmax + Δ

where dmax is the maximum value reported by the photo sensor before ti and Δ is a constant which ensures that the first large detection gives the timestamp of the event (i.e. small variations around the first large signal are not considered). Hence Δ guarantees that only sharp changes in the detected value generate an observed event.
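A minimal sketch of this running-maximum threshold detector is shown below, written in Java for readability rather than the nesC used on the motes; the margin, ambient level and the synthetic readings are example values of our own choosing.

```java
/** Sketch of the threshold-based event detector: an event is time stamped when a
 *  photo-sensor reading exceeds the running maximum by more than a margin delta. */
public class EventDetector {
    private final int delta;   // margin that filters small fluctuations around the maximum
    private int dMax;          // maximum reading seen so far (seeded with the ambient level)

    public EventDetector(int delta, int ambientLevel) {
        this.delta = delta;
        this.dMax = ambientLevel;
    }

    /** Returns true (record a timestamp) when the reading satisfies d_t > d_max + delta. */
    public boolean sample(int reading) {
        boolean event = reading > dMax + delta;
        if (reading > dMax) dMax = reading;   // track the maximum seen so far
        return event;
    }

    public static void main(String[] args) {
        EventDetector det = new EventDetector(50, 200);      // delta and ambient level are example values
        int[] adc = { 210, 215, 212, 520, 530, 525, 300 };   // synthetic photo-sensor readings
        for (int t = 0; t < adc.length; t++)
            if (det.sample(adc[t])) System.out.println("event detected at sample " + t);
    }
}
```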
4.4 Localization Function L(T)
The Localization Function is implemented in the Java GUI. It matches the timestamps created by the Event Distribution Function with those reported by the sensor nodes. The Localization Function for the Point Scan EDF has as input a time sequence Ti = {t1}, as reported by node i. The function performs a simple search for the event with a timestamp closest to t1. If t1 is constrained by:

t_en < t1 < t_en+1

where en and en+1 are two consecutive events, then the obtained location for node i is:

x = x_en+1, y = y_en+1

The case for the Line Scan is treated similarly. The input to the Localization Function is the time sequence Ti = {t1, t2} as reported by node i. If the reported timestamps are constrained by:

t_en < t1 < t_en+1 and t_em < t2 < t_em+1

where en and en+1 are two consecutive events on the horizontal scan and em and em+1 are two consecutive events on the vertical scan, then the inferred location for node i is:

x = x_en+1, y = y_em+1

The Localization Function for the Area Cover EDF has as input a timestamp set Ti = {ti1, ti2, ..., tin} of the n events detected by node i. We recall the notation for the set of m timestamps of events generated by the Spotlight device, T' = {t1', t2', ..., tm'}. A code di = di1di2...dim is then constructed for each node i, such that dij = 1 if tj' ∈ Ti and dij = 0 if tj' ∉ Ti. The function performs a search for an event with an identical code. If the following condition holds:

di = d_en

where en is an event with code d_en, then the inferred location for node i is:

x = x_en, y = y_en

4.5 Time Synchronization
The time synchronization in the Spotlight system consists of two parts:
- Synchronization between sensor nodes: This is achieved through the Flooding Time Synchronization Protocol [18]. In this protocol, synchronized nodes (the root node is the only synchronized node at the beginning) send time synchronization messages to unsynchronized nodes. The sender puts the time stamp into the synchronization message right before the bytes containing the time stamp are transmitted. Once a receiver gets the message, it follows the sender's time and performs the necessary calculations to compensate for the clock drift.
- Synchronization between the sensor nodes and the Spotlight device: We implemented this part through a two-way handshake between the Spotlight device and one node, used as the base station. The sensor node is attached to the Spotlight device through a serial interface.

Figure 12. Two-way synchronization

As shown in Figure 12, let's assume that the Spotlight device sends a synchronization message (SYNC) at its local time T1, the sensor node receives it at its local time T2 and acknowledges it at local time T3 (both T2 and T3 are sent back in the ACK). After the Spotlight device receives the ACK, at its local time T4, the time synchronization can be achieved as follows:

Offset = ((T2 - T1) + (T3 - T4)) / 2    (11)
T_global = T_node = T_spotlight + Offset

We note that Equation 11 assumes that the one-way trip delays are the same in both directions. In practice this does not hold well enough. To improve the performance, we separate the handshaking process from the timestamp exchanges. The handshaking is done fast, through a 2-byte exchange between the Spotlight device and the sensor node (the timestamps are still recorded, but not sent). After this fast handshaking, the recorded time stamps are exchanged. The result indicates that this approach can significantly improve the accuracy of time synchronization.
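The offset computation of Equation 11 is simple enough to show directly. The sketch below, with names and timestamp values of our own choosing, computes the clock offset from the four timestamps of the SYNC/ACK exchange and converts a node-local detection time into Spotlight-device time.

```java
/** Sketch of the two-way synchronization offset (Equation 11) between the
 *  Spotlight device and the base-station node. All times in milliseconds. */
public class TwoWaySync {
    /** Offset = ((T2 - T1) + (T3 - T4)) / 2, assuming symmetric one-way delays. */
    static double offset(long t1, long t2, long t3, long t4) {
        return ((t2 - t1) + (t3 - t4)) / 2.0;
    }

    /** Convert a node-local timestamp into Spotlight-device time:
     *  T_node = T_spotlight + Offset, hence T_spotlight = T_node - Offset. */
    static double toSpotlightTime(long nodeTime, double offset) {
        return nodeTime - offset;
    }

    public static void main(String[] args) {
        long t1 = 10_000, t2 = 10_130, t3 = 10_180, t4 = 10_070;  // example handshake timestamps
        double off = offset(t1, t2, t3, t4);                      // = (130 + 110) / 2 = 120 ms
        System.out.println("offset = " + off + " ms");
        System.out.println("node time 12000 -> device time " + toSpotlightTime(12_000, off));
    }
}
```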
5. PERFORMANCE EVALUATION
In this section we present the performance evaluation of the Spotlight systems when using the three event distribution functions, i.e. Point Scan, Line Scan and Area Cover, described in Section 3. For the μSpotlight system we used 10 Mica2 motes. The sensor nodes were attached to a vertically positioned Veltex board. By projecting the light onto the sensor nodes, we were able to generate well controlled Point, Line and Area events. The Spotlight device was able to generate events, i.e. project light patterns, covering an area of approximate size 180x140cm². The screen resolution for the projector was 1024x768, and the movement of the Point Scan and Line Scan techniques was done through increments (in the appropriate direction) of 10 pixels between events. Each experimental point was obtained from 10 successive runs of the localization procedure. Each set of 10 runs was preceded by a calibration phase, aimed at estimating the total delays (between the Spotlight device and each sensor node) in detecting an event. During the calibration, we created an event covering the entire sensor field (we illuminated the entire area). The timestamp reported by each sensor node, in conjunction with the timestamp created by the Spotlight device, was used to obtain the time offset for each sensor node. More sophisticated calibration procedures have been reported previously [35]. In addition to the time offset, we added a manually configurable parameter, called bias. It was used to best estimate the center of an event.

Figure 13. Deployment site for the Spotlight system

For the Spotlight system evaluation, we deployed 10 XSM motes in a football field. The site is shown in Figure 13 (laser beams are depicted with red arrows and sensor nodes with white dots). Two sets of experiments were run, with the Spotlight device positioned at 46m and at 170m from the sensor field. The sensor nodes were aligned and the Spotlight device executed a Point Scan. The localization system computed the coordinates of the sensor nodes, and the Spotlight device was oriented, through a GoTo command sent to the telescope mount, towards the computed location. In the initial stages of the experiments, we manually measured the localization error. For our experimental evaluation, the metrics of interest were as follows:
- Localization error, defined as the distance between the real location and the one obtained from the Spotlight system.
- Localization duration, defined as the time span between the first and last event.
- Localization range, defined as the maximum distance between the Spotlight device and the sensor nodes.
- A Localization Cost function, Cost: {localization accuracy} x {localization duration} → [0, 1], which quantifies the trade-off between the accuracy in localization and the localization duration. The objective is to minimize the Localization Cost function. By denoting with ei the localization error for the ith scenario, with di the localization duration for the ith scenario, with max(e) the maximum localization error, with max(d) the maximum localization duration, and with α the importance factor, the Localization Cost function is formally defined as:

Cost(ei, di) = α · ei/max(e) + (1 − α) · di/max(d)

- Localization Bias. This metric is used to investigate the effectiveness of the calibration procedure. If, for example, all computed locations have a bias in the west direction, a calibration factor can be used to compensate for the difference.
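The cost metric is easy to reproduce. The sketch below (illustrative only, with made-up error and duration values) normalizes each scenario's error and duration by the maxima over all scenarios and weights them with α, as in the definition above.

```java
import java.util.Arrays;

/** Sketch of the Localization Cost metric:
 *  Cost(e_i, d_i) = alpha * e_i / max(e) + (1 - alpha) * d_i / max(d). */
public class LocalizationCost {
    static double[] costs(double[] errors, double[] durations, double alpha) {
        double maxE = Arrays.stream(errors).max().orElse(1);
        double maxD = Arrays.stream(durations).max().orElse(1);
        double[] cost = new double[errors.length];
        for (int i = 0; i < errors.length; i++)
            cost[i] = alpha * errors[i] / maxE + (1 - alpha) * durations[i] / maxD;
        return cost;
    }

    public static void main(String[] args) {
        double[] e = { 2.0, 5.0, 11.0 };     // localization errors [cm] (made-up values)
        double[] d = { 110.0, 60.0, 35.0 };  // localization durations [sec] (made-up values)
        // alpha = 0.5: accuracy and duration are equally important
        System.out.println(Arrays.toString(costs(e, d, 0.5)));
    }
}
```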
The parameters that we varied during the performance evaluation of our system were: the type of scanning (Point, Line and Area), the size of the event, the duration of the event (for Area Cover), the scanning speed, the power of the laser, and the distance between the Spotlight device and the sensor field, to estimate the range of the system.

5.1 Point Scan - μSpotlight system
In this experiment, we investigated how the size of the event and the scanning speed affect the localization error. Figure 14 shows the mean localization errors with their standard deviations. It can be observed that, while the scanning speed (varying between 35cm/sec and 87cm/sec) has a minor influence on the localization accuracy, the size of the event has a dramatic effect.

Figure 14. Localization Error vs. Event Size for the Point Scan EDF (location error [cm] vs. event size [cm], for scanning speeds of 35, 43, 58 and 87 cm/sec)

The obtained localization error varied from as little as 2cm to over 11cm for the largest event. This dependence can be explained by our Event Detection algorithm: the first detection above a threshold gave the timestamp for the event. The duration of the localization scheme is shown in Figure 15. The dependency of the localization duration on the size of the event and scanning speed is natural. A bigger event allows a reduction in the total duration of up to 70%. The localization duration is directly proportional to the scanning speed, as expected, and depicted in Figure 15.

Figure 15. Localization Duration vs. Event Size for the Point Scan EDF (localization duration [sec] vs. event size [cm], for the same scanning speeds)

An interesting trade-off is between the localization accuracy (usually the most important factor) and the localization time (important in environments where stealthiness is paramount). Figure 16 shows the Localization Cost function for α = 0.5 (accuracy and duration are equally important). As shown in Figure 16, it can be observed that an event size of approximately 10-15cm (depending on the scanning speed) minimizes our Cost function. For α = 1, the same graph would be a monotonically increasing function, while for α = 0, it would be a monotonically decreasing function.

Figure 16. Localization Cost vs. Event Size for the Point Scan EDF (localization cost vs. event size [cm], for the same scanning speeds)

5.2 Line Scan - μSpotlight system
In a similar manner to the Point Scan EDF, for the Line Scan EDF we were interested in the dependency of the localization error and duration on the size of the event and the scanning speed. We represent in Figure 17 the localization error for different event sizes. It is interesting to observe the dependency (concave shape) of the localization error vs. the event size. Moreover, a question that should arise is why the same dependency was not observed in the case of the Point Scan EDF.

Figure 17. Localization Error vs.
Event Size for the Line Scan EDF The explanation for this concave dependency is the existence of a bias in location estimation.\nAs a reminder, a bias factor was introduced in order to best estimate the central point of events that have a large size.\nWhat Figure 17 shows is the fact that the bias factor was optimal for an event size of approximately 7cm.\nFor events smaller and larger than this, the bias factor was too large, and too small, respectively.\nThus, it introduced biased errors in the position estimation.\nThe reason why we did not observe the same dependency in the case of the Point Scan EDF was that we did not experiment with event sizes below 7cm, due to the long time it would have taken to scan the entire field with events as small as 1.7cm.\nThe results for the localization duration as a function of the size of the event are shown in Figure 18.\nAs shown, the localization duration is directly proportional to the scanning speed.\nThe size of the event has a smaller influence on the localization duration.\nOne can remark the average localization duration of about 10sec, much shorter then the duration obtained in the Point Scan experiment.\nThe Localization Cost function dependency on the event size and scanning speed, for \u03b1=0.5, is shown in Figure 19.\nThe dependency on the scanning speed is very small (the Cost Function achieves a minimum in the same 4-6cm range).\nIt is interesting to note that this 4-6cm optimal event size is smaller than the one observed in the case of Point Scan EDF.\nThe explanation for this is that the smaller localization duration observed in the Line Scan EDF, allowed a shift (towards smaller event sizes) in the total Localization Cost Function.\n0 5 10 15 20 1.7 3.5 7.0 10.5 14.0 17.5 Event Size [cm] LocalizationDuration[sec] 87cm/sec 58cm/sec 43cm/sec 35cm/sec Figure 18.\nLocalization Duration vs. Event Size for the Line Scan EDF 0.50 0.55 0.60 0.65 0.70 0.75 0.80 1.0 3.0 5.0 7.0 9.0 11.0 Event Size [cm] LocalizationCost[%] 87cm/sec 58cm/sec 43cm/sec 35cm/sec Figure 19.\nCost Function vs. 
Event Size for the Line Scan EDF During our experiments with the Line Scan EDF, we observed evidence of a bias in location estimation.\nThe estimated locations for all sensor nodes exhibited different biases, for different event sizes.\nFor example, for an event size of 17.5cm, the estimated location for sensor nodes was to the upper-left size of the actual location.\nThis was equivalent to an early detection, since our scanning was done from left to right and from top to bottom.\nThe scanning speed did not influence the bias.\nIn order to better understand the observed phenomena, we analyzed our data.\nFigure 20 shows the bias in the horizontal direction, for different event sizes (the vertical bias was almost identical, and we omit it, due to space constraints).\nFrom Figure 20, one can observe that the smallest observed bias, and hence the most accurate positioning, was for an event of size 7cm.\nThese results are consistent with the observed localization error, shown in Figure 17.\nWe also adjusted the measured localization error (shown in Figure 17) for the observed bias (shown in Figure 20).\nThe results of an ideal case of Spotlight Localization system with Line Scan EDF are shown in Figure 21.\nThe errors are remarkably small, varying between 0.1cm and 0.8cm, with a general trend of higher localization errors for larger event sizes.\n21 -6 -5 -4 -3 -2 -1 0 1 2 3 1.7 3.5 7.0 10.5 14.0 17.5 Event Size [cm] HorizontalBias[cm] 87cm/sec 58cm/sec 43cm/sec 35cm/sec Figure 20.\nPosition Estimation Bias for the Line Scan EDF 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.7 3.5 7.0 10.5 14.0 17.5 Event Size [cm] LocalizationErrorw/oBias[cm] 87cm/sec 58cm/sec 43cm/sec 35cm/sec Figure 21.\nPosition Estimation w/o Bias (ideal), for the Line Scan EDF 5.3 Area Cover - \u03bcSpotlight system In this experiment, we investigated how the number of bits used to quantify the entire sensor field, affected the localization accuracy.\nIn our first experiment we did not use error correcting codes.\nThe results are shown in Figure 22.\n0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 6 8 10 12 Number of Bits Locationerror[cm] 20ms/event 40ms/event 60ms/event 80ms/event 100ms/event Figure 22.\nLocalization Error vs. 
Event Size for the Area Cover EDF One can observe a remarkable accuracy, with localization error on the order of 0.3-0.6cm.\nWhat is important to observe is the variance in the localization error.\nIn the scenario where 12 bits were used, while the average error was very small, there were a couple of cases, where an incorrect event detection generated a larger than expected error.\nAn example of how this error can occur was described in Section 3.4.\nThe experimental results, presented in Figure 22, emphasize the need for error correction of the bit patterns observed and reported by the sensor nodes.\nThe localization duration results are shown in Figure 23.\nIt can be observed that the duration is directly proportional with the number of bits used, with total durations ranging from 3sec, for the least accurate method, to 6-7sec for the most accurate.\nThe duration of an event had a small influence on the total localization time, when considering the same scenario (same number of bits for the code).\nThe Cost Function dependency on the number of bits in the code, for \u03b1=0.5, is shown in Figure 24.\nGenerally, since the localization duration for the Area Scan can be extremely small, a higher accuracy in the localization is desired.\nWhile the Cost function achieves a minimum when 10 bits are used, we attribute the slight increase observed when 12 bits were used to the two 12bit scenarios where larger than the expected errors were observed, namely 6-7mm (as shown in Figure 22).\n0 1 2 3 4 5 6 7 8 9 10 6 8 10 12 Number of Bits LocalizationDuration[sec] 20ms/event 40ms/event 60ms/event 80ms/event 100ms/event Figure 23.\nLocalization Duration vs. Event Size for the Area Cover EDF 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 4 6 8 10 12 14 Number of Bits CostFunction[%] 20ms/event 40ms/event 60ms/event 80ms/event 100ms/event Figure 24.\nCost Function vs. 
Event Size for the Area Cover EDF -0.4 -0.1 0.2 0.5 0.8 1.1 1.4 20 40 60 80 100 Event Duration [ms/event] Locationerror[cm] w/o ECC w/ ECC Figure 25.\nLocalization Error w/ and w/o Error Correction The two problematic scenarios (shown in Figure 22, where for 12-bit codes we observed errors larger than the event size, due to errors in detection) were further explored by using error correction codes.\nAs described in Section 3.3, we implemented an extended Golay (24, 12) error correction mechanism in our location estimation algorithm.\nThe experimental results are depicted in Figure 25, and show a consistent accuracy.\nThe scenario without error correction codes, is simply the same 12-bit code scenario, shown in Figure 22.\nWe only investigated the 12-bit scenario, due to its match with the 12bit data required by the Golay encoding scheme (extended Golay producing 24-bit codewords).\n22 5.4 Point Scan - Spotlight system In this section we describe the experiments performed at a football stadium, using our Spotlight system.\nThe hardware that we had available allowed us to evaluate the Point Scan technique of the Spotlight system.\nIn our evaluation, we were interested to see the performance of the system at different ranges.\nFigures 26 and 27 show the localization error versus the event size at two different ranges: 46m and 170m.\nFigure 26 shows a remarkable accuracy in localization.\nThe errors are in the centimeter range.\nOur initial, manual measurements of the localization error were most of the time difficult to make, since the spot of the laser was almost perfectly covering the XSM mote.\nWe are able to achieve localization errors of a few centimeters, which only range-based localization schemes are able to achieve [25].\nThe observed dependency on the size of the event is similar to the one observed in the \u03bcSpotlight system evaluation, and shown in Figure 14.\nThis proved that the \u03bcSpotlight system is a viable alternative for investigating complex EDFs, without incurring the costs for the necessary hardware.\n0 5 10 15 20 25 0 5 10 15 20 25 30 Event Size [cm] LocalizationError[cm] 0.41m/sec 0.81m/sec 1.7m/sec Figure 26.\nLocalization Error vs. Event Size for Spotlight system at 46m In the experiments performed over a much longer distance between the Spotlight device and sensor network, the average localization error remains very small.\nLocalization errors of 510cm were measured, as Figure 27 shows.\nWe were simply amazed by the accuracy that the system is capable of, when considering that the Spotlight system operated over the length of a football stadium.\nThroughout our experimentation with the Spotlight system, we have observed localization errors that were simply offsets of real locations.\nSince the same phenomenon was observed when experimenting with the \u03bcSpotlight system, we believe that with auto-calibration, the localization error can be further reduced.\n0 5 10 15 20 25 6 12 18 Event Size [cm] LocalizationError[cm] 0.7m/sec 1.4m/sec 3m/sec Figure 27.\nLocalization Error vs. 
Event Size for Spotlight system at 170m The time required for localization using the Spotlight system with a Point Scan EDF, is given by: t=(L*l)/(s*Es), where L and l are the dimensions of the sensor network field, s is the scanning speed, and Es is the size of the event.\nFigure 28 shows the time for localizing a sensor network deployed in an area of size of a football field using the Spotlight system.\nHere we ignore the message propagation time, from the sensor nodes to the Spotlight device.\nFrom Figure 28 it can be observed that the very small localization errors are prohibitively expensive in the case of the Point Scan.\nWhen localization errors of up to 1m are tolerable, localization duration can be as low as 4 minutes.\nLocalization durations of 5-10 minutes, and localization errors of 1m are currently state of art in the realm of range-free localization schemes.\nAnd these results are achieved by using the Point Scan scheme, which required the highest Localization Time, as it was shown in Table 1.\n0 5 10 15 20 25 30 35 40 0 25 50\u00a075\u00a0100\u00a0125 150 Event Size [cm] LocalizationTime[min] 3m/sec 6m/sec 9m/sec Figure 28.\nLocalization Time vs. Event Size for Spotlight system One important characteristic of the Spotlight system is its range.\nThe two most important factors are the sensitivity of the photosensor and the power of the Spotlight source.\nWe were interested in measuring the range of our Spotlight system, considering our capabilities (MTS310 sensor board and inexpensive, $12-$85, diode laser).\nAs a result, we measured the intensity of the laser beam, having the same focus, at different distances.\nThe results are shown in Figure 29.\n950 1000 1050 1100 0 50\u00a0100\u00a0150\u00a0200 Distance [m] Intensity[ADCcount] 35mW 7mW Figure 29.\nLocalization Range for the Spotlight system From Figure 29, it can be observed that only a minor decrease in the intensity occurs, due to absorption and possibly our imperfect focusing of the laser beam.\nA linear fit of the experimental data shows that distances of up to 6500m can be achieved.\nWhile we do not expect atmospheric conditions, over large distances, to be similar to our 200m evaluation, there is strong evidence that distances (i.e. 
altitude) of 1000-2000m can easily be achieved.\nThe angle between the laser beam and the vertical should be minimized (less than 45\u00b0), as it reduces the difference between the beam cross-section (event size) and the actual projection of the beam on the ground.\nIn a similar manner, we were interested in finding out the maximum size of an event, that can be generated by a COTS laser and that is detectable by the existing photosensor.\nFor this, we 23 varied the divergence of the laser beam and measured the light intensity, as given by the ADC count.\nThe results are shown in Figure 30.\nIt can be observed that for the less powerful laser, an event size of 1.5m is the limit.\nFor the more powerful laser, the event size can be as high as 4m.\nThrough our extensive performance evaluation results, we have shown that the Spotlight system is a feasible, highly accurate, low cost solution for localization of wireless sensor networks.\nFrom our experience with sources of laser radiation, we believe that for small and medium size sensor network deployments, in areas of less than 20,000m2 , the Area Cover scheme is a viable solution.\nFor large size sensor network deployments, the Line Scan, or an incremental use of the Area Cover are very good options.\n0 200 400 600 800 1000 1200 0 50\u00a0100\u00a0150\u00a0200 Event Size [cm] Intensity[ADCcount] 35mW 7mW Figure 30.\nDetectable Event Sizes that can be generated by COTS lasers 6.\nOPTIMIZATIONS/LESSONS LEARNED 6.1 Distributed Spotlight System The proposed design and the implementation of the Spotlight system can be considered centralized, due to the gathering of the sensor data and the execution of the Localization Function L(t) by the Spotlight device.\nWe show that this design can easily be transformed into a distributed one, by offering two solutions.\nOne idea is to disseminate in the network, information about the path of events, generated by the EDF (similar to an equation, describing a path), and let the sensor nodes execute the Localization Function.\nFor example, in the Line Scan scenario, if the starting and ending points for the horizontal and vertical scans, and the times they were reached, are propagated in the network, then any sensor in the network can obtain its location (assuming a constant scanning speed).\nA second solution is to use anchor nodes which know their positions.\nIn the case of Line Scan, if three anchors are present, after detecting the presence of the two events, the anchors flood the network with their locations and times of detection.\nUsing the same simple formulas as in the previous scheme, all sensor nodes can infer their positions.\n6.2 Localization Overhead Reduction Another requirement imposed by the Spotlight system design, is the use of a time synchronization protocol between the Spotlight device and the sensor network.\nRelaxing this requirement and imposing only a time synchronization protocol among sensor nodes is a very desirable objective.\nThe idea is to use the knowledge that the Spotlight device has about the speed with which the scanning of the sensor field takes place.\nIf the scanning speed is constant (let``s call it s), then the time difference (let``s call it \u0394t) between the event detections of two sensor nodes is, in fact, an accurate measure of the range between them: d=s*\u0394t.\nHence, the Spotlight system can be used for accurate ranging of the distance between any pair of sensor nodes.\nAn important observation is that this ranging technique does not suffer from limitations 
of others: small range and directionality for ultrasound, or irregularity, fading and multipath for Received Signal Strength Indicator (RSSI).\nAfter the ranges between nodes have been determined (either in a centralized or distributed manner) graph embedding algorithms can be used for a realization of a rigid graph, describing the sensor network topology.\n6.3 Dynamic Event Distribution Function E(t) Another system optimization is for environments where the sensor node density is not uniform.\nOne disadvantage of the Line Scan technique, when compared to the Area Cover, is the localization time.\nAn idea is to use two scans: one which uses a large event size (hence larger localization errors), followed by a second scan in which the event size changes dynamically.\nThe first scan is used for identifying the areas with a higher density of sensor nodes.\nThe second scan uses a larger event in areas where the sensor node density is low and a smaller event in areas with a higher sensor node density.\nA dynamic EDF can also be used when it is very difficult to meet the power requirements for the Spotlight device (imposed by the use of the Area Cover scheme in a very large area).\nIn this scenario, a hybrid scheme can be used: the first scan (Point Scan) is performed quickly, with a very large event size, and it is meant to identify, roughly, the location of the sensor network.\nSubsequent Area Cover scans will be executed on smaller portions of the network, until the entire deployment area is localized.\n6.4 Stealthiness Our implementation of the Spotlight system used visible light for creating events.\nUsing the system during the daylight or in a room well lit, poses challenges due to the solar or fluorescent lamp radiation, which generate a strong background noise.\nThe alternative, which we used in our performance evaluations, was to use the system in a dark room (\u03bcSpotlight system) or during the night (Spotlight system).\nWhile using the Spotlight system during the night is a good solution for environments where stealthiness is not important (e.g. environmental sciences) for others (e.g. military applications), divulging the presence and location of a sensor field, could seriously compromise the efficacy of the system.\nFigure 31.\nFluorescent Light Spectra (top), Spectral Response for CdSe cells (bottom) A solution to this problem, which we experimented with in the \u00b5Spotlight system, was to use an optical filter on top of the light 24 sensor.\nThe spectral response of a CdSe photo sensor spans almost the entire visible domain [37], with a peak at about 700nm (Figure 31-bottom).\nAs shown in Figure 31-top, the fluorescent light has no significant components above 700nm.\nHence, a simple red filter (Schott RG-630), which transmits all light with wavelength approximately above 630nm, coupled with an Event Distribution Function that generates events with wavelengths above the same threshold, would allow the use of the system when a fluorescent light is present.\nA solution for the Spotlight system to be stealthy at night, is to use a source of infra-red radiation (i.e. 
laser) emitting in the range [750, 1000]nm.\nFor a daylight use of the Spotlight system, the challenge is to overcome the strong background of the natural light.\nA solution we are considering is the use of a narrow-band optical filter, centered at the wavelength of the laser radiation.\nThe feasibility and the cost-effectiveness of this solution remain to be proven.\n6.5 Network Deployed in Unknown Terrain A further generalization is when the map of the terrain where the sensor network was deployed is unknown.\nWhile this is highly unlikely for many civil applications of wireless sensor network technologies, it is not difficult to imagine military applications where the sensor network is deployed in a hostile and unknown terrain.\nA solution to this problem is a system that uses two Spotlight devices, or equivalently, the use of the same device from two distinct positions, executing, from each of them, a complete localization procedure.\nIn this scheme, the position of the sensor node is uniquely determined by the intersection of the two location directions obtained by the system.\nThe relative localization (for each pair of Spotlight devices) will require an accurate knowledge of the 3 translation and 3 rigid-body rotation parameters for Spotlight``s position and orientation (as mentioned in Section 3).\nThis generalization is also applicable to scenarios where, due to terrain variations, there is no single aerial point with a direct line of sight to all sensor nodes, e.g. hilly terrain.\nBy executing the localization procedure from different aerial points, the probability of establishing a line of sight with all the nodes, increases.\nFor some military scenarios [1] [12], where open terrain is prevalent, the existence of a line of sight is not a limiting factor.\nIn light of this, the Spotlight system can not be used in forests or indoor environments.\n7.\nCONCLUSIONS AND FUTURE WORK In this paper we presented the design, implementation and evaluation of a localization system for wireless sensor networks, called Spotlight.\nOur localization solution does not require any additional hardware for the sensor nodes, other than what already exists.\nAll the complexity of the system is encapsulated into a single Spotlight device.\nOur localization system is reusable, i.e. the costs can be amortized through several deployments, and its performance is not affected by the number of sensor nodes in the network.\nOur experimental results, obtained from a real system deployed outdoors, show that the localization error is less than 20cm.\nThis error is currently state of art, even for range-based localization systems and it is 75% smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [31].\nAs future work, we would like to explore the self-calibration and self-tuning of the Spotlight system.\nThe accuracy of the system can be further improved if the distribution of the event, instead of a single timestamp, is reported.\nA generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques.\n8.\nACKNOWLEDGEMENTS This work was supported by the DARPA IXO office, under the NEST project (grant number F336616-01-C-1905) and by the NSF grant CCR-0098269.\nWe would like to thank S. Cornwell for allowing us to run experiments in the stadium, M. 
Klopf for his assistance with optics, and anonymous reviewers and our shepherd, Koen Langendoen, for their valuable feedback.\n9.\nREFERENCES [1] A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, M. Gouda, Y. Choi, T. Herman, S. Kulharni, U. Arumugam, M. Nesterenko, A. Vora, M. Miyashita, A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification and Tracking, in Computer Networks 46(5), 2004.\n[2] P. Bahl, V.N. Padmanabhan, RADAR: An In-Building RFbased User Location and Tracking System, in Proceedings of Infocom, 2000 [3] M. Broxton, J. Lifton, J. Paradiso, Localizing a Sensor Network via Collaborative Processing of Global Stimuli, in Proceedings of EWSN, 2005.\n[4] N. Bulusu, J. Heidemann, D. Estrin, GPS-less Low Cost Outdoor Localization for Very Small Devices, in IEEE Personal Communications Magazine, 2000.\n[5] P. Corke, R. Peterson, D. Rus, Networked Robots: Flying Robot Navigation Using a Sensor Net, in ISSR, 2003.\n[6] L. Doherty, L. E. Ghaoui, K. Pister, Convex Position Estimation in Wireless Sensor Networks, in Proceedings of Infocom, 2001 [7] P. Dutta, M. Grimmer, A. Arora, S. Bibyk, D. Culler, Design of a Wireless Sensor Network Platform for Detecting Rare, Random, and Ephemeral Events, in Proceedings of IPSN, 2005.\n[8] E. Elnahrawy, X. Li, R. Martin, The Limits of Localization using RSSI, in Proceedings of SECON, 2004.\n[9] D. Fox, W. Burgard, S. Thrun, Markov Localization for Mobile Robots in Dynamic Environments, in Journal of Artificial Intelligence Research, 1999.\n[10] D. Fox, W. Burgard, F. Dellaert, S. Thrun, Monte Carlo Localization: Efficient Position Estimation for Mobile Robots, in Conference on Artificial Intelligence, 2000.\n[11] D. Ganesan, B. Krishnamachari, A. Woo, D. Culler, D. Estrin, S. Wicker, Complex Behaviour at Scale: An Experimental Study of Low Power Wireless Sensor Networks, in Technical Report, UCLA-TR 01-0013, 2001.\n[12] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, B. Krogh, An Energy-Efficient Surveillance System Using Wireless Sensor Networks, in Proceedings of Mobisys, 2004.\n[13] T. He, C. Huang, B. Blum, J. A. Stankovic, T. Abdelzaher, Range-Free Localization Schemes for Large Scale Sensor Networks in Proceedings of Mobicom, 2003.\n[14] L. Hu, D. Evans, Localization for Mobile Sensor Networks, in Proceedings of Mobicom, 2004.\n[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, G. Agha, Resilient Localization for Sensor Networks in Outdoor Environments, UIUC Technical Report, 2004.\n25 [16] K. Langendoen, N. Reijers, Distributed Localization in Wireless Sensor Networks, A Comparative Study, in Computer Networks vol.\n43, 2003.\n[17] K. Lorincz, M. Welsh, MoteTrack: A Robust, Decentralized Approach to RF-Based Location Tracking, in Proceedings of Intl..\nWorkshop on Location and Context-Awareness, 2005.\n[18] M. Maroti, B. Kusy, G. Simon, A. Ledeczi, The Flooding Time Synchronization Protocol, in Proceedings of Sensys, 2004.\n[19] D. Moore, J. Leonard, D. Rus, S. Teller, Robust Distributed Network Localization with Noisy Range Measurements in Proceedings of Sensys, 2004.\n[20] R. Nagpal, H. Shrobe, J. Bachrach, Organizing a Global Coordinate System for Local Information on an Adhoc Sensor Network, in A.I Memo 1666.\nMIT A.I. Laboratory, 1999.\n[21] D. Niculescu, B. Nath, DV-based Positioning in Adhoc Networks in Telecommunication Systems, vol.\n22, 2003.\n[22] E. Osterweil, T. Schoellhammer, M. Rahimi, M. Wimbrow, T. 
Stathopoulos, L.Girod, M. Mysore, A.Wu, D. Estrin, The Extensible Sensing System, CENS-UCLA poster, 2004.\n[23] B.W. Parkinson, J. Spilker, Global Positioning System: theory and applications, in Progress in Aeronautics and Astronautics, vol.\n163, 1996.\n[24] P.N. Pathirana, N. Bulusu, A. Savkin, S. Jha, Node Localization Using Mobile Robots in Delay-Tolerant Sensor Networks, in Transactions on Mobile Computing, 2004.\n[25] N. Priyantha, A. Chakaborty, H. Balakrishnan, The Cricket Location-support System, in Proceedings of MobiCom, 2000.\n[26] N. Priyantha, H. Balakrishnan, E. Demaine, S. Teller, Mobile-Assisted Topology Generation for Auto-Localization in Sensor Networks, in Proceedings of Infocom, 2005.\n[27] A. Savvides, C. Han, M. Srivastava, Dynamic Fine-grained localization in Adhoc Networks of Sensors, in Proceedings of MobiCom, 2001.\n[28] Y. Shang, W. Ruml, Improved MDS-Based Localization, in Proceedings of Infocom, 2004.\n[29] M. Sichitiu, V. Ramadurai,Localization of Wireless Sensor Networks with a Mobile Beacon, in Proceedings of MASS, 2004.\n[30] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, Sensor Network-Base Countersniper System, in Proceedings of Sensys, 2004.\n[31] R. Stoleru, T. He, J.A. Stankovic, Walking GPS: A Practical Solution for Localization in Manually Deployed Wireless Sensor Networks, in Proceedings of EmNetS, 2004.\n[32] R. Stoleru, J.A. Stankovic, Probability Grid: A Location Estimation Scheme for Wireless Sensor Networks, in Proceedings of SECON, 2004.\n[33] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, D. Culler, An Analysis of a Large Scale Habitat Monitoring Application, in Proceedings of Sensys, 2004.\n[34] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, D. Culler, The Effects of Ranging Noise on Multi-hop Localization: An Empirical Study, in Proceedings of IPSN, 2005.\n[35] K. Whitehouse, D. Culler, Calibration as Parameter Estimation in Sensor Networks, in Proceedings of WSNA, 2002.\n[36] P. Zhang, C. Sadler, S. A. Lyon, M. Martonosi, Hardware Design Experiences in ZebraNet, in Proceedings of Sensys, 2004.\n[37] Selco Products Co..\nConstruction and Characteristics of CdS Cells, product datasheet, 2004 26", "lvl-3": "A High-Accuracy , Low-Cost Localization System for Wireless Sensor Networks\nABSTRACT\nThe problem of localization of wireless sensor nodes has long been regarded as very difficult to solve , when considering the realities of real world environments .\nIn this paper , we formally describe , design , implement and evaluate a novel localization system , called Spotlight .\nOur system uses the spatio-temporal properties of well controlled events in the network ( e.g. 
, light ) , to obtain the locations of sensor nodes .\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes , as required by other localization systems .\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes .\nThrough performance evaluations of a real system deployed outdoors , we obtain a 20cm localization error .\nA sensor network , with any number of nodes , deployed in a 2500m2 area , can be localized in under 10 minutes , using a device that costs less than $ 1000 .\nTo the best of our knowledge , this is the first report of a sub-meter localization error , obtained in an outdoor environment , without equipping the wireless sensor nodes with specialized ranging hardware .\n1 .\nINTRODUCTION\nRecently , wireless sensor network systems have been used in many promising applications including military surveillance , habitat monitoring , wildlife tracking etc. [ 12 ] [ 22 ] [ 33 ] [ 36 ] .\nWhile many middleware services , to support these applications , have been designed and implemented successfully , localization - finding the position of sensor nodes - remains one of the most difficult research challenges to be solved practically .\nSince most emerging applications based on networked sensor nodes require location awareness to assist their operations , such as annotating sensed data with location context , it is an indispensable requirement for a sensor node to be able to find its own location .\nMany approaches have been proposed in the literature [ 4 ] [ 6 ] [ 13 ] [ 14 ] [ 19 ] [ 20 ] [ 21 ] [ 23 ] [ 27 ] [ 28 ] , however it is still not clear how these solutions can be practically and economically deployed .\nAn on-board GPS [ 23 ] is a typical high-end solution , which requires sophisticated hardware to achieve high resolution time synchronization with satellites .\nThe constraints on power and cost for tiny sensor nodes preclude this as a viable solution .\nOther solutions require per node devices that can perform ranging among neighboring nodes .\nThe difficulties of these approaches are twofold .\nFirst , under constraints of form factor and power supply , the effective ranges of such devices are very limited .\nFor example the effective range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [ 26 ] .\nSecond , since most sensor nodes are static , i.e. 
the location is not expected to change , it is not cost-effective to equip these sensors with special circuitry just for a one-time localization .\nTo overcome these limitations , many range-free localization schemes have been proposed .\nMost of these schemes estimate the location of sensor nodes by exploiting the radio connectivity information among neighboring nodes .\nThese approaches eliminate the need of high-cost specialized hardware , at the cost of a less accurate localization .\nIn addition , the radio propagation characteristics vary over time and are environment dependent , thus imposing high calibration costs for the range-free localizations schemes .\nWith such limitations in mind , this paper addresses the following research challenge : How to reconcile the need for high accuracy in location estimation with the cost to achieve it .\nOur answer to this challenge is a localization system called Spotlight .\nThis system employs an asymmetric architecture , in which sensor nodes do not need any additional hardware , other than what they currently have .\nAll the sophisticated hardware and computation reside on a single Spotlight device .\nThe Spotlight device uses a steerable laser light source , illuminating the sensor nodes placed within a known terrain .\nWe demonstrate that this localization is much more accurate ( i.e. , tens of centimeters ) than the range-based localization schemes and that it has a much longer effective range ( i.e. , thousands of meters ) than the solutions based on ultra-sound/acoustic ranging .\nAt the same time , since only a single sophisticated device is needed to localize the whole network , the amortized cost is much smaller than the cost to add hardware components to the individual sensors .\n2 .\nRELATED WORK\nIn this section , we discuss prior work in localization in two major categories : the range-based localization schemes ( which use either expensive , per node , ranging devices for high accuracy , or less accurate ranging solutions , as the Received Signal Strength Indicator ( RSSI ) ) , and the range-free schemes , which use only connectivity information ( hop-by-hop ) as an indication of proximity among the nodes .\nThe localization problem is a fundamental research problem in many domains .\nIn the field of robotics , it has been studied extensively [ 9 ] [ 10 ] .\nThe reported localization errors are on the order of tens of centimeters , when using specialized ranging hardware , i.e. 
laser range finder or ultrasound .\nDue to the high cost and non-negligible form factor of the ranging hardware , these solutions can not be simply applied to sensor networks .\nThe RSSI has been an attractive solution for estimating the distance between the sender and the receiver .\nThe RADAR system [ 2 ] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes .\nThe location of a mobile user is estimated within a few meters .\nIn a similar approach , MoteTrack [ 17 ] distributes the reference RSSI values to the beacon nodes .\nSolutions that use RSSI and do not require beacon nodes have also been proposed [ 5 ] [ 14 ] [ 24 ] [ 26 ] [ 29 ] .\nThey all share the idea of using a mobile beacon .\nThe sensor nodes that receive the beacons , apply different algorithms for inferring their location .\nIn [ 29 ] , Sichitiu proposes a solution in which the nodes that receive the beacon construct , based on the RSSI value , a constraint on their position estimate .\nIn [ 26 ] , Priyantha et al. propose MAL , a localization method in which a mobile node ( moving strategically ) assists in measuring distances between node pairs , until the constraints on distances generate a rigid graph .\nIn [ 24 ] , Pathirana et al. formulate the localization problem as an on-line estimation in a nonlinear dynamic system and proposes a Robust Extended Kalman Filter for solving it .\nElnahrawy [ 8 ] provides strong evidence of inherent limitations of localization accuracy using RSSI , in indoor environments .\nA more precise ranging technique uses the time difference between a radio signal and an acoustic wave , to obtain pair wise distances between sensor nodes .\nThis approach produces smaller localization errors , at the cost of additional hardware .\nThe Cricket location-support system [ 25 ] can achieve a location granularity of tens of centimeters with short range ultrasound transceivers .\nAHLoS , proposed by Savvides et al. [ 27 ] , employs Time of Arrival ( ToA ) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations .\nA similar ToA technique is employed in [ 3 ] .\nIn [ 30 ] , Simon et al. implement a distributed system ( using acoustic ranging ) which locates a sniper in an urban terrain .\nAcoustic ranging for localization is also used by Kwon et al. [ 15 ] .\nThe reported errors in localization vary from 2.2 m to 9.5 m , depending on the type ( centralized vs. distributed ) of the Least Square Scaling algorithm used .\nFor wireless sensor networks ranging is a difficult option .\nThe hardware cost , the energy expenditure , the form factor , the small range , all are difficult compromises , and it is hard to envision cheap , unreliable and resource-constraint devices make use of range-based localization solutions .\nHowever , the high localization accuracy , achievable by these schemes is very desirable .\nTo overcome the challenges posed by the range-based localization schemes , when applied to sensor networks , a different approach has been proposed and evaluated in the past .\nThis approach is called range-free and it attempts to obtain location information from the proximity to a set of known beacon nodes .\nBulusu et al. propose in [ 4 ] a localization scheme , called Centroid , in which each node localizes itself to the centroid of its proximate beacon nodes .\nIn [ 13 ] , He et al. 
propose APIT , a scheme in which each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacon nodes heard by the node .\nThe Global Coordinate System [ 20 ] , developed at MIT , uses apriori knowledge of the node density in the network , to estimate the average hop distance .\nThe DV - * family of localization schemes [ 21 ] , uses the hop count from known beacon nodes to the nodes in the network to infer the distance .\nThe majority of range-free localization schemes have been evaluated in simulations , or controlled environments .\nSeveral studies [ 11 ] [ 32 ] [ 34 ] have emphasized the challenges that real environments pose .\nLangendoen and Reijers present a detailed , comparative study of several localization schemes in [ 16 ] .\nTo the best of our knowledge , Spotlight is the first range-free localization scheme that works very well in an outdoor environment .\nOur system requires a line of sight between a single device and the sensor nodes , and the map of the terrain where the sensor field is located .\nThe Spotlight system has a long effective range ( 1000 's meters ) and does not require any infrastructure or additional hardware for sensor nodes .\nThe Spotlight system combines the advantages and does not suffer from the disadvantages of the two localization classes .\n3 .\nSPOTLIGHT SYSTEM DESIGN\nSpotlight system\n3.1 Definitions and Problem Formulation\n3.2 Point Scan Event Distribution Function\n3.3 Line Scan Event Distribution Function\n3.4 Area Cover Event Distribution Function\n3.5 Event Distribution Function Analysis\n3.6 Localization Error Analysis\n4 .\nSYSTEM IMPLEMENTATION\n4.1 \u00b5Spotlight System\n4.2 Spotlight System\n4.3 Event Detection Function D ( t )\n4.4 Localization Function L ( T )\n4.5 Time Synchronization\n5 .\nPERFORMANCE EVALUATION\n5.1 Point Scan - \u03bcSpotlight system\nScan EDF\nScan EDF\n5.2 Line Scan - \u03bcSpotlight system\nScan EDF\n5.3 Area Cover - \u03bcSpotlight system\n5.4 Point Scan - Spotlight system\nCOTS lasers 6 .\nOPTIMIZATIONS/LESSONS LEARNED\n6.1 Distributed Spotlight System\n6.2 Localization Overhead Reduction\n6.3 Dynamic Event Distribution Function E ( t )\n6.4 Stealthiness\n6.5 Network Deployed in Unknown Terrain\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper we presented the design , implementation and evaluation of a localization system for wireless sensor networks , called Spotlight .\nOur localization solution does not require any additional hardware for the sensor nodes , other than what already exists .\nAll the complexity of the system is encapsulated into a single Spotlight device .\nOur localization system is reusable , i.e. 
the costs can be amortized through several deployments , and its performance is not affected by the number of sensor nodes in the network .\nOur experimental results , obtained from a real system deployed outdoors , show that the localization error is less than 20cm .\nThis error is currently state of art , even for range-based localization systems and it is 75 % smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [ 31 ] .\nAs future work , we would like to explore the self-calibration and self-tuning of the Spotlight system .\nThe accuracy of the system can be further improved if the distribution of the event , instead of a single timestamp , is reported .\nA generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques .", "lvl-4": "A High-Accuracy , Low-Cost Localization System for Wireless Sensor Networks\nABSTRACT\nThe problem of localization of wireless sensor nodes has long been regarded as very difficult to solve , when considering the realities of real world environments .\nIn this paper , we formally describe , design , implement and evaluate a novel localization system , called Spotlight .\nOur system uses the spatio-temporal properties of well controlled events in the network ( e.g. , light ) , to obtain the locations of sensor nodes .\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes , as required by other localization systems .\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes .\nThrough performance evaluations of a real system deployed outdoors , we obtain a 20cm localization error .\nA sensor network , with any number of nodes , deployed in a 2500m2 area , can be localized in under 10 minutes , using a device that costs less than $ 1000 .\nTo the best of our knowledge , this is the first report of a sub-meter localization error , obtained in an outdoor environment , without equipping the wireless sensor nodes with specialized ranging hardware .\n1 .\nINTRODUCTION\nRecently , wireless sensor network systems have been used in many promising applications including military surveillance , habitat monitoring , wildlife tracking etc. [ 12 ] [ 22 ] [ 33 ] [ 36 ] .\nWhile many middleware services , to support these applications , have been designed and implemented successfully , localization - finding the position of sensor nodes - remains one of the most difficult research challenges to be solved practically .\nAn on-board GPS [ 23 ] is a typical high-end solution , which requires sophisticated hardware to achieve high resolution time synchronization with satellites .\nThe constraints on power and cost for tiny sensor nodes preclude this as a viable solution .\nOther solutions require per node devices that can perform ranging among neighboring nodes .\nThe difficulties of these approaches are twofold .\nFirst , under constraints of form factor and power supply , the effective ranges of such devices are very limited .\nFor example the effective range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [ 26 ] .\nSecond , since most sensor nodes are static , i.e. 
the location is not expected to change , it is not cost-effective to equip these sensors with special circuitry just for a one-time localization .\nTo overcome these limitations , many range-free localization schemes have been proposed .\nMost of these schemes estimate the location of sensor nodes by exploiting the radio connectivity information among neighboring nodes .\nThese approaches eliminate the need of high-cost specialized hardware , at the cost of a less accurate localization .\nIn addition , the radio propagation characteristics vary over time and are environment dependent , thus imposing high calibration costs for the range-free localizations schemes .\nOur answer to this challenge is a localization system called Spotlight .\nThis system employs an asymmetric architecture , in which sensor nodes do not need any additional hardware , other than what they currently have .\nAll the sophisticated hardware and computation reside on a single Spotlight device .\nThe Spotlight device uses a steerable laser light source , illuminating the sensor nodes placed within a known terrain .\nAt the same time , since only a single sophisticated device is needed to localize the whole network , the amortized cost is much smaller than the cost to add hardware components to the individual sensors .\n2 .\nRELATED WORK\nThe localization problem is a fundamental research problem in many domains .\nThe reported localization errors are on the order of tens of centimeters , when using specialized ranging hardware , i.e. laser range finder or ultrasound .\nDue to the high cost and non-negligible form factor of the ranging hardware , these solutions can not be simply applied to sensor networks .\nThe RSSI has been an attractive solution for estimating the distance between the sender and the receiver .\nThe RADAR system [ 2 ] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes .\nThe location of a mobile user is estimated within a few meters .\nIn a similar approach , MoteTrack [ 17 ] distributes the reference RSSI values to the beacon nodes .\nSolutions that use RSSI and do not require beacon nodes have also been proposed [ 5 ] [ 14 ] [ 24 ] [ 26 ] [ 29 ] .\nThey all share the idea of using a mobile beacon .\nThe sensor nodes that receive the beacons , apply different algorithms for inferring their location .\nIn [ 29 ] , Sichitiu proposes a solution in which the nodes that receive the beacon construct , based on the RSSI value , a constraint on their position estimate .\nIn [ 24 ] , Pathirana et al. formulate the localization problem as an on-line estimation in a nonlinear dynamic system and proposes a Robust Extended Kalman Filter for solving it .\nElnahrawy [ 8 ] provides strong evidence of inherent limitations of localization accuracy using RSSI , in indoor environments .\nA more precise ranging technique uses the time difference between a radio signal and an acoustic wave , to obtain pair wise distances between sensor nodes .\nThis approach produces smaller localization errors , at the cost of additional hardware .\nThe Cricket location-support system [ 25 ] can achieve a location granularity of tens of centimeters with short range ultrasound transceivers .\nAHLoS , proposed by Savvides et al. [ 27 ] , employs Time of Arrival ( ToA ) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations .\nIn [ 30 ] , Simon et al. 
implement a distributed system ( using acoustic ranging ) which locates a sniper in an urban terrain .\nAcoustic ranging for localization is also used by Kwon et al. [ 15 ] .\nThe reported errors in localization vary from 2.2 m to 9.5 m , depending on the type ( centralized vs. distributed ) of the Least Square Scaling algorithm used .\nFor wireless sensor networks ranging is a difficult option .\nHowever , the high localization accuracy , achievable by these schemes is very desirable .\nTo overcome the challenges posed by the range-based localization schemes , when applied to sensor networks , a different approach has been proposed and evaluated in the past .\nThis approach is called range-free and it attempts to obtain location information from the proximity to a set of known beacon nodes .\nBulusu et al. propose in [ 4 ] a localization scheme , called Centroid , in which each node localizes itself to the centroid of its proximate beacon nodes .\nThe Global Coordinate System [ 20 ] , developed at MIT , uses apriori knowledge of the node density in the network , to estimate the average hop distance .\nThe DV - * family of localization schemes [ 21 ] , uses the hop count from known beacon nodes to the nodes in the network to infer the distance .\nThe majority of range-free localization schemes have been evaluated in simulations , or controlled environments .\nLangendoen and Reijers present a detailed , comparative study of several localization schemes in [ 16 ] .\nTo the best of our knowledge , Spotlight is the first range-free localization scheme that works very well in an outdoor environment .\nOur system requires a line of sight between a single device and the sensor nodes , and the map of the terrain where the sensor field is located .\nThe Spotlight system has a long effective range ( 1000 's meters ) and does not require any infrastructure or additional hardware for sensor nodes .\nThe Spotlight system combines the advantages and does not suffer from the disadvantages of the two localization classes .\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper we presented the design , implementation and evaluation of a localization system for wireless sensor networks , called Spotlight .\nOur localization solution does not require any additional hardware for the sensor nodes , other than what already exists .\nAll the complexity of the system is encapsulated into a single Spotlight device .\nOur localization system is reusable , i.e. 
the costs can be amortized through several deployments , and its performance is not affected by the number of sensor nodes in the network .\nOur experimental results , obtained from a real system deployed outdoors , show that the localization error is less than 20cm .\nThis error is currently state of art , even for range-based localization systems and it is 75 % smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [ 31 ] .\nAs future work , we would like to explore the self-calibration and self-tuning of the Spotlight system .\nThe accuracy of the system can be further improved if the distribution of the event , instead of a single timestamp , is reported .\nA generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques .", "lvl-2": "A High-Accuracy , Low-Cost Localization System for Wireless Sensor Networks\nABSTRACT\nThe problem of localization of wireless sensor nodes has long been regarded as very difficult to solve , when considering the realities of real world environments .\nIn this paper , we formally describe , design , implement and evaluate a novel localization system , called Spotlight .\nOur system uses the spatio-temporal properties of well controlled events in the network ( e.g. , light ) , to obtain the locations of sensor nodes .\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes , as required by other localization systems .\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes .\nThrough performance evaluations of a real system deployed outdoors , we obtain a 20cm localization error .\nA sensor network , with any number of nodes , deployed in a 2500m2 area , can be localized in under 10 minutes , using a device that costs less than $ 1000 .\nTo the best of our knowledge , this is the first report of a sub-meter localization error , obtained in an outdoor environment , without equipping the wireless sensor nodes with specialized ranging hardware .\n1 .\nINTRODUCTION\nRecently , wireless sensor network systems have been used in many promising applications including military surveillance , habitat monitoring , wildlife tracking etc. 
[ 12 ] [ 22 ] [ 33 ] [ 36 ] .\nWhile many middleware services , to support these applications , have been designed and implemented successfully , localization - finding the position of sensor nodes - remains one of the most difficult research challenges to be solved practically .\nSince most emerging applications based on networked sensor nodes require location awareness to assist their operations , such as annotating sensed data with location context , it is an indispensable requirement for a sensor node to be able to find its own location .\nMany approaches have been proposed in the literature [ 4 ] [ 6 ] [ 13 ] [ 14 ] [ 19 ] [ 20 ] [ 21 ] [ 23 ] [ 27 ] [ 28 ] , however it is still not clear how these solutions can be practically and economically deployed .\nAn on-board GPS [ 23 ] is a typical high-end solution , which requires sophisticated hardware to achieve high resolution time synchronization with satellites .\nThe constraints on power and cost for tiny sensor nodes preclude this as a viable solution .\nOther solutions require per node devices that can perform ranging among neighboring nodes .\nThe difficulties of these approaches are twofold .\nFirst , under constraints of form factor and power supply , the effective ranges of such devices are very limited .\nFor example the effective range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [ 26 ] .\nSecond , since most sensor nodes are static , i.e. the location is not expected to change , it is not cost-effective to equip these sensors with special circuitry just for a one-time localization .\nTo overcome these limitations , many range-free localization schemes have been proposed .\nMost of these schemes estimate the location of sensor nodes by exploiting the radio connectivity information among neighboring nodes .\nThese approaches eliminate the need of high-cost specialized hardware , at the cost of a less accurate localization .\nIn addition , the radio propagation characteristics vary over time and are environment dependent , thus imposing high calibration costs for the range-free localizations schemes .\nWith such limitations in mind , this paper addresses the following research challenge : How to reconcile the need for high accuracy in location estimation with the cost to achieve it .\nOur answer to this challenge is a localization system called Spotlight .\nThis system employs an asymmetric architecture , in which sensor nodes do not need any additional hardware , other than what they currently have .\nAll the sophisticated hardware and computation reside on a single Spotlight device .\nThe Spotlight device uses a steerable laser light source , illuminating the sensor nodes placed within a known terrain .\nWe demonstrate that this localization is much more accurate ( i.e. , tens of centimeters ) than the range-based localization schemes and that it has a much longer effective range ( i.e. 
, thousands of meters ) than the solutions based on ultra-sound/acoustic ranging .\nAt the same time , since only a single sophisticated device is needed to localize the whole network , the amortized cost is much smaller than the cost to add hardware components to the individual sensors .\n2 .\nRELATED WORK\nIn this section , we discuss prior work in localization in two major categories : the range-based localization schemes ( which use either expensive , per node , ranging devices for high accuracy , or less accurate ranging solutions , as the Received Signal Strength Indicator ( RSSI ) ) , and the range-free schemes , which use only connectivity information ( hop-by-hop ) as an indication of proximity among the nodes .\nThe localization problem is a fundamental research problem in many domains .\nIn the field of robotics , it has been studied extensively [ 9 ] [ 10 ] .\nThe reported localization errors are on the order of tens of centimeters , when using specialized ranging hardware , i.e. laser range finder or ultrasound .\nDue to the high cost and non-negligible form factor of the ranging hardware , these solutions can not be simply applied to sensor networks .\nThe RSSI has been an attractive solution for estimating the distance between the sender and the receiver .\nThe RADAR system [ 2 ] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes .\nThe location of a mobile user is estimated within a few meters .\nIn a similar approach , MoteTrack [ 17 ] distributes the reference RSSI values to the beacon nodes .\nSolutions that use RSSI and do not require beacon nodes have also been proposed [ 5 ] [ 14 ] [ 24 ] [ 26 ] [ 29 ] .\nThey all share the idea of using a mobile beacon .\nThe sensor nodes that receive the beacons , apply different algorithms for inferring their location .\nIn [ 29 ] , Sichitiu proposes a solution in which the nodes that receive the beacon construct , based on the RSSI value , a constraint on their position estimate .\nIn [ 26 ] , Priyantha et al. propose MAL , a localization method in which a mobile node ( moving strategically ) assists in measuring distances between node pairs , until the constraints on distances generate a rigid graph .\nIn [ 24 ] , Pathirana et al. formulate the localization problem as an on-line estimation in a nonlinear dynamic system and proposes a Robust Extended Kalman Filter for solving it .\nElnahrawy [ 8 ] provides strong evidence of inherent limitations of localization accuracy using RSSI , in indoor environments .\nA more precise ranging technique uses the time difference between a radio signal and an acoustic wave , to obtain pair wise distances between sensor nodes .\nThis approach produces smaller localization errors , at the cost of additional hardware .\nThe Cricket location-support system [ 25 ] can achieve a location granularity of tens of centimeters with short range ultrasound transceivers .\nAHLoS , proposed by Savvides et al. [ 27 ] , employs Time of Arrival ( ToA ) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations .\nA similar ToA technique is employed in [ 3 ] .\nIn [ 30 ] , Simon et al. implement a distributed system ( using acoustic ranging ) which locates a sniper in an urban terrain .\nAcoustic ranging for localization is also used by Kwon et al. [ 15 ] .\nThe reported errors in localization vary from 2.2 m to 9.5 m , depending on the type ( centralized vs. 
distributed ) of the Least Square Scaling algorithm used .\nFor wireless sensor networks ranging is a difficult option .\nThe hardware cost , the energy expenditure , the form factor , the small range , all are difficult compromises , and it is hard to envision cheap , unreliable and resource-constraint devices make use of range-based localization solutions .\nHowever , the high localization accuracy , achievable by these schemes is very desirable .\nTo overcome the challenges posed by the range-based localization schemes , when applied to sensor networks , a different approach has been proposed and evaluated in the past .\nThis approach is called range-free and it attempts to obtain location information from the proximity to a set of known beacon nodes .\nBulusu et al. propose in [ 4 ] a localization scheme , called Centroid , in which each node localizes itself to the centroid of its proximate beacon nodes .\nIn [ 13 ] , He et al. propose APIT , a scheme in which each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacon nodes heard by the node .\nThe Global Coordinate System [ 20 ] , developed at MIT , uses apriori knowledge of the node density in the network , to estimate the average hop distance .\nThe DV - * family of localization schemes [ 21 ] , uses the hop count from known beacon nodes to the nodes in the network to infer the distance .\nThe majority of range-free localization schemes have been evaluated in simulations , or controlled environments .\nSeveral studies [ 11 ] [ 32 ] [ 34 ] have emphasized the challenges that real environments pose .\nLangendoen and Reijers present a detailed , comparative study of several localization schemes in [ 16 ] .\nTo the best of our knowledge , Spotlight is the first range-free localization scheme that works very well in an outdoor environment .\nOur system requires a line of sight between a single device and the sensor nodes , and the map of the terrain where the sensor field is located .\nThe Spotlight system has a long effective range ( 1000 's meters ) and does not require any infrastructure or additional hardware for sensor nodes .\nThe Spotlight system combines the advantages and does not suffer from the disadvantages of the two localization classes .\n3 .\nSPOTLIGHT SYSTEM DESIGN\nThe main idea of the Spotlight localization system is to generate controlled events in the field where the sensor nodes were deployed .\nAn event could be , for example , the presence of light in an area .\nUsing the time when an event is perceived by a sensor node and the spatio-temporal properties of the generated events , spatial information ( i.e. location ) regarding the sensor node can be inferred .\nFigure 1 .\nLocalization of a sensor network using the\nSpotlight system\nWe envision , and depict in Figure 1 , a sensor network deployment and localization scenario as follows : wireless sensor nodes are randomly deployed from an unmanned aerial vehicle .\nAfter deployment , the sensor nodes self-organize into a network and execute a time-synchronization protocol .\nAn aerial vehicle ( e.g. 
helicopter ) , equipped with a device , called Spotlight , flies over the network and generates light events .\nThe sensor nodes detect the events and report back to the Spotlight device , through a base station , the timestamps when the events were detected .\nThe Spotlight device computes the location of the sensor nodes .\nDuring the design of our Spotlight system , we made the following assumptions : - the sensor network to be localized is connected and a middleware , able to forward data from the sensor nodes to the Spotlight device , is present .\n- the aerial vehicle has a very good knowledge about its position and orientation ( 6 parameters : 3 translation and 3 rigid-body rotation ) and it possesses the map of the field where the network was deployed .\n- a powerful Spotlight device is available and it is able to generate\nspatially large events that can be detected by the sensor nodes , even in the presence of background noise ( daylight ) .\n- a line of sight between the Spotlight device and sensor nodes exists .\nOur assumptions are simplifying assumptions , meant to reduce the complexity of the presentation , for clarity .\nWe propose solutions that do not rely on these simplifying assumptions , in Section 6 .\nIn order to formally describe and generalize the Spotlight localization system , we introduce the following definitions .\n3.1 Definitions and Problem Formulation\nLet 's assume that the space A \u2282 R3 contains all sensor nodes N , and that each node Ni is positioned at pi ( x , y , z ) .\nTo obtain pi ( x , y , z ) , a Spotlight localization system needs to support three main functions , namely an Event Distribution Function ( EDF ) E ( t ) , an Event Detection Function D ( e ) , and a Localization Function L ( Ti ) .\nThey are formally defined as follows :\nDefinition 1 : An event e ( t , p ) is a detectable phenomenon that occurs at time t and at point p \u0454 A. Examples of events are light , heat , smoke , sound , etc. .\nLet Ti = { ti1 , ti2 , ... , tin } be a set of n timestamps of events detected by a node i. Let T ' = { t1 ' , t2 ' , ... 
, tm ' } be the set of m timestamps of events generated in the sensor field .\nDefinition 2 : The Event Detection Function D ( e ) defines a binary detection algorithm .\nFor a given event e :\ntrue , Event e is detected\nDefinition 4 : The Localization Function L ( Ti ) defines a localization algorithm with input Ti , a sequence of timestamps of events detected by the node i :\nFigure 2 .\nSpotlight system architecture\nAs shown in Figure 2 , the Event Detection Function D ( e ) is supported by the sensor nodes .\nIt is used to determine whether an external event happens or not .\nIt can be implemented through either a simple threshold-based detection algorithm or other advanced digital signal processing techniques .\nThe Event Distribution E ( t ) and Localization Functions L ( Ti ) are implemented by a Spotlight device .\nThe Localization function is an aggregation algorithm which calculates the intersection of multiple sets of points .\nThe Event Distribution Function E ( t ) describes the distribution of events over time .\nIt is the core of the Spotlight system and it is much more sophisticated than the other two functions .\nDue to the fact that E ( t ) is realized by the Spotlight device , the hardware requirements for the sensor nodes remain minimal .\nWith the support of these three functions , the localization process goes as follows :\n1 ) A Spotlight device distributes events in the space A over a period of time .\n2 ) During the event distribution , sensor nodes record the time sequence Ti = { ti1 , ti2 , ... , tin } at which they detect the events .\n3 ) After the event distribution , each sensor node sends the detection time sequence back to the Spotlight device .\n4 ) The Spotlight device estimates the location of a sensor node i , using the time sequence Ti and the known E ( t ) function .\nThe Event Distribution Function E ( t ) is the core technique used in the Spotlight system and we propose three designs for it .\nThese designs have different tradeoffs and the cost comparison is presented in Section 3.5 .\n3.2 Point Scan Event Distribution Function\nTo illustrate the basic functionality of a Spotlight system , we start with a simple sensor system where a set of nodes are placed along a straight line ( A = [ 0 , l ] \u2282 R ) .\nThe Spotlight device generates point events ( e.g. light spots ) along this line with constant speed s .\nThe set of timestamps of events detected by a node i is Ti = { ti1 } .\nThe Event Distribution Function E ( t ) is :\nwhere t \u2208 [ 0 , l/s ] .\nThe resulting localization function is :\nwhere D ( e ( ti1 , pi ) ) = true for node i positioned at pi .\nThe implementation of the Event Distribution Function E ( t ) is straightforward .\nAs shown in Figure 3 ( a ) , when a light source emits a beam of light with the angular speed given by\ngenerated along the line situated at distance d.\nFigure 3 .\nThe implementation of the Point Scan EDF\nThe Point Scan EDF can be generalized to the case where nodes are placed in a two dimensional plane R2 .\nIn this case , the Spotlight system progressively scans the plane to activate the sensor nodes .\nThis scenario is depicted in Figure 3 ( b ) .\n3.3 Line Scan Event Distribution Function\nSome devices , e.g. 
diode lasers , can generate an entire line of events simultaneously .\nWith these devices , we can support the Line Scan Event Distribution Function easily .\nWe assume that the sensor nodes are placed in a two dimensional plane ( A = [ l x l ] \u2282 R2 ) and that the scanning speed is s .\nThe set of timestamps of events detected by a node i is Ti = { ti1 , ti2 } .\nFigure 4 .\nThe implementation of the Line Scan EDF\nThe Line Scan EDF is defined as follows :\nwhere D ( e ( ti1 , pi ) ) = true , D ( e ( ti2 , pi ) ) = true for node i positioned at pi .\n3.4 Area Cover Event Distribution Function\nOther devices , such as light projectors , can generate events that cover an area .\nThis allows the implementation of the Area Cover EDF .\nThe idea of Area Cover EDF is to partition the space A into multiple sections and assign a unique binary identifier , called code , to each section .\nLet 's suppose that the localization is done within a plane ( A \u2282 R2 ) .\nEach section Sk within A has a unique code k .\nThe Area Cover EDF is then defined as follows :\nwhere COG ( Sk ) denotes the center of gravity of Sk .\nWe illustrate the Area Cover EDF with a simple example .\nAs shown in Figure 5 , the plane A is divided into 16 sections .\nEach section Sk has a unique code k .\nThe Spotlight device distributes the events according to these codes : at time j a section Sk is covered by an event ( lit by light ) , if the jth bit of k is 1 .\nA node residing anywhere in the section Sk is localized at the center of gravity of that section .\nFor example , nodes within section 1010 detect the events at time T = { 1 , 3 } .\nAt t = 4 the section where each node resides can be determined .\nA more accurate localization requires a finer partitioning of the plane , hence the number of bits in the code will increase .\nConsidering the noise that is present in a real , outdoor environment , it is easy to observe that a relatively small error in detecting the correct bit pattern could result in a large localization error .\nReturning to the example shown in Figure 5 , if a sensor node is located in the section with code 0000 , and due to the noise , at time t = 3 , it thinks it detected an event , it will incorrectly conclude that its code is 1000 , and it positions itself two squares below its correct position .\nThe localization accuracy can deteriorate even further , if multiple errors are present in the transmission of the code .\nA natural solution to this problem is to use error-correcting codes , which greatly reduce the probability of an error , without paying the price of a re-transmission , or lengthening the transmission time too much .\nSeveral error correction schemes have been proposed in the past .\nTwo of the most notable ones are the Hamming ( 7 , 4 ) code and the Golay ( 23 , 12 ) code .\nBoth are perfect linear error correcting codes .\nThe Hamming coding scheme can detect up to 2-bit errors and correct 1-bit errors .\nIn the Hamming ( 7 , 4 ) scheme , a message having 4 bits of data ( e.g. dddd , where d is a data bit ) is transmitted as a 7-bit word by adding 3 error control bits ( e.g.
dddpdpp , where p is a parity bit ) .\nFigure 5 .\nThe steps of Area Cover EDF .\nThe events cover\nthe shaded areas .\nThe steps of the Area Cover technique , when using Hamming ( 7 , 4 ) scheme are shown in Figure 6 .\nGolay codes can detect up to 6-bit errors and correct up to 3-bit errors .\nSimilar to Hamming ( 7 , 4 ) , Golay constructs a 23-bit codeword from 12-bit data .\nGolay codes have been used in satellite and spacecraft data transmission and are most suitable in cases where short codeword lengths are desirable .\nFigure 6 .\nThe steps of Area Cover EDF with Hamming ( 7 , 4 ) ECC .\nThe events cover the shaded areas .\nLet 's assume a 1-bit error probability of 0.01 , and a 12-bit message that needs to be transmitted .\nThe probability of a failed transmission is thus : 0.11 , if no error detection and correction is used ; 0.0061 for the Hamming scheme ( i.e. more than 1-bit error ) ; and 0.000076 for the Golay scheme ( i.e. more than 3-bit errors ) .\nGolay is thus 80 times more robust that the Hamming scheme , which is 20 times more robust than the no error correction scheme .\n\u23a7\nConsidering that a limited number of corrections is possible by any coding scheme , a natural question arises : can we minimize the localization error when there are errors that can not be corrected ?\nThis can be achieved by a clever placement of codes in the grid .\nAs shown in Figure 7 , the placement A , in the presence of a 1-bit error has a smaller average localization error when compared to the placement B .\nThe objective of our code placement strategy is to reduce the total Euclidean distance between all pairs of codes with Hamming distances smaller than K , the largest number of expected 1-bit errors .\nFigure 7 .\nDifferent code placement strategies\nFormally , a placement is represented by a function P : [ 0 , l ] d \u2192 C , which assigns a code to every coordinate in the d-dimensional cube of size l ( e.g. 
, in the planar case , we place codes in a 2dimensional grid ) .\nWe denote by dE ( i , j ) the Euclidean distance and by dH ( i , j ) the Hamming distance between two codes i and j .\nIn a noisy environment , dH ( i , j ) determines the crossover probability between the two codes .\nFor the case of independent detections , the higher dH ( i , j ) is , the lower the crossover probability will be .\nThe objective function is defined as follows :\nEquation 10 is a non-linear and non-convex programming problem .\nIn general , it is analytically hard to obtain the global minimum .\nTo overcome this , we propose a Greedy Placement method to obtain suboptimal results .\nIn this method we initialize the 2-dimensional grid with codes .\nThen we swap the codes within the grid repeatedly , to minimize the objective function .\nFor each swap , we greedily chose a pair of codes , which can reduce the objective function ( Equation 10 ) the most .\nThe proposed Greedy Placement method ends when no swap of codes can further minimize the objective function .\nFor evaluation , we compared the average localization error in the presence of K-bit error for two strategies : the proposed Greedy Placement and the Row-Major Placement ( it places the codes consecutively in the array , in row-first order ) .\nFigure 8 .\nLocalization error with code placement and no\nECC As Figure 8 shows , if no error detection/correction capability is present and 1-bit errors occur , then our Greedy Placement method can reduce the localization error by an average 23 % , when compared to the Row-Major Placement .\nIf error detection and correction schemes are used ( e.g. Hamming ( 12 , 8 ) and if 3-bit errors occur ( K = 3 ) then the Greedy Placement method reduces localization error by 12 % , when compared to the Row-Major Placement , as shown in Figure 9 .\nIf K = 1 , then there is no benefit in using the Greedy Placement method , since the 1-bit error can be corrected by the Hamming scheme .\nFigure 9 .\nLocalization error with code placement and\nHamming ECC\n3.5 Event Distribution Function Analysis\nAlthough all three aforementioned techniques are able to localize the sensor nodes , they differ in the localization time , communication overhead and energy consumed by the Event Distribution Function ( let 's call it Event Overhead ) .\nLet 's assume that all sensor nodes are located in a square with edge size D , and that the Spotlight device can generate N events ( e.g. Point , Line and Area Cover events ) every second and that the maximum tolerable localization error is r. 
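As a rough, back-of-the-envelope illustration of how D, r and N interact, the Java sketch below (Java being the language of the Spotlight device software described in Section 4) counts the events each EDF must generate over a D x D field and converts the counts into localization time. The counts follow from the constructions in Sections 3.2-3.4: one event per r x r cell for the Point Scan, one horizontal plus one vertical sweep for the Line Scan, and one event per code bit for the Area Cover (ignoring error-correction overhead). The sample values of D, r and N are hypothetical, and the sketch illustrates the scaling behavior rather than reproducing Table 1.

```java
// Back-of-the-envelope event counts for the three EDFs, assuming a D x D
// deployment area, tolerable localization error r, and an event rate of
// N events per second. Illustrative only; not a reproduction of Table 1.
public class EdfCostSketch {

    // Point Scan: one event per r x r cell, i.e. roughly (D/r)^2 events.
    static double pointScanEvents(double d, double r) {
        return Math.pow(d / r, 2);
    }

    // Line Scan: one horizontal and one vertical sweep, each with about
    // D/r line positions (Section 3.3), i.e. roughly 2 * (D/r) events.
    static double lineScanEvents(double d, double r) {
        return 2 * (d / r);
    }

    // Area Cover: one event per bit of the section code; (D/r)^2 sections
    // need about log2((D/r)^2) bits (Section 3.4), ignoring ECC overhead.
    static double areaCoverEvents(double d, double r) {
        return Math.ceil(2 * (Math.log(d / r) / Math.log(2)));
    }

    public static void main(String[] args) {
        double d = 50.0;   // hypothetical 50 m x 50 m field (2500 m^2)
        double r = 0.2;    // hypothetical 20 cm tolerable error
        double n = 100.0;  // hypothetical event rate (events per second)

        System.out.printf("Point Scan: %.0f events, ~%.0f s%n",
                pointScanEvents(d, r), pointScanEvents(d, r) / n);
        System.out.printf("Line Scan:  %.0f events, ~%.0f s%n",
                lineScanEvents(d, r), lineScanEvents(d, r) / n);
        System.out.printf("Area Cover: %.0f events, ~%.2f s%n",
                areaCoverEvents(d, r), areaCoverEvents(d, r) / n);
    }
}
```

Dividing each event count by N gives the localization duration; none of these quantities depends on the number of sensor nodes, which is the scalability property noted for Table 1 below.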
Table 1 presents the execution cost comparison of the three different Spotlight techniques .\nTable 1 .\nExecution Cost Comparison\nTable 1 indicates that the Event Overhead for the Point Scan\nmethod is the smallest - it requires a one-time coverage of the area , hence the D2 .\nHowever the Point Scan takes a much longer time than the Area Cover technique , which finishes in logrD seconds .\nThe Line Scan method trades the Event Overhead well with the localization time .\nBy doubling the Event Overhead , the Line Scan method takes only r/2D percentage of time to complete , when compared with the Point Scan method .\nFrom Table 1 , it can be observed that the execution costs do not depend on the number of sensor nodes to be localized .\nIt is important to remark the ratio Event Overhead per unit time , which is indicative of the power requirement for the Spotlight device .\nThis ratio is constant for the Point Scan ( r2 * N ) while it grows linearly with area , for the Area Cover ( D2 * N/2 ) .\nIf the deployment area is very large , the use of the Area Cover EDF is prohibitively expensive , if not impossible .\nFor practical purposes , the Area Cover is a viable solution for small to medium size networks , while the Line Scan works well for large networks .\nWe discuss the implications of the power requirement for the Spotlight device , and offer a hybrid solution in Section 6 .\n3.6 Localization Error Analysis\nThe accuracy of localization with the Spotlight technique depends on many aspects .\nThe major factors that were considered during the implementation of the system are discussed below :\n- Time Synchronization : the Spotlight system exchanges time stamps between sensor nodes and the Spotlight device .\nIt is necessary for the system to reach consensus on global time through synchronization .\nDue to the uncertainty in hardware processing and wireless communication , we can only confine such errors within certain bounds ( e.g. one jiffy ) .\nAn imprecise input to the Localization Function L ( T ) leads to an error in node localization .\n- Uncertainty in Detection : the sampling rate of the sensor nodes is finite , consequently , there will be an unpredictable delay between the time when an event is truly present and when the sensor node detects it .\nLower sampling rates will generate larger localizations errors .\n- Size of the Event : the events distributed by the Spotlight device can not be infinitely small .\nIf a node detects one event , it is hard for it to estimate the exact location of itself within the event .\n- Realization of Event Distribution Function : EDF defines locations of events at time t. Due to the limited accuracy ( e.g. 
mechanical imprecision ) , a Spotlight device might generate events which locate differently from where these events are supposed to be .\nIt is important to remark that the localization error is independent of the number of sensor nodes in the network .\nThis independence , as well as the aforementioned independence of the execution cost , indicate the very good scalability properties ( with the number of sensor nodes , but not with the area of deployment ) that the Spotlight system possesses .\n4 .\nSYSTEM IMPLEMENTATION\nFor our performance evaluation we implemented two Spotlight systems .\nUsing these two implementations we were able to investigate the full spectrum of Event Distribution techniques , proposed in Section 3 , at a reduced `` one time '' cost ( less than $ 1,000 ) .\nThe first implementation , called \u03bcSpotlight , had a short range ( 10-20 meters ) , however its capability of generating the entire spectrum of EDFs made it very useful .\nWe used this implementation mainly to investigate the capabilities of the Spotlight system and tune its performance .\nIt was not intended to represent the full solution , but only a scaled down version of the system .\nThe second implementation , the Spotlight system , had a much longer range ( as far as 6500m ) , but it was limited in the types of EDFs that it can generate .\nThe goal of this implementation was to show how the Spotlight system works in a real , outdoor environment , and show correlations with the experimental results obtained from the \u03bcSpotlight system implementation .\nIn the remaining part of this section , we describe how we implemented the three components ( Event Distribution , Event Detection and Localization functions ) of the Spotlight architecture , and the time synchronization protocol , a key component of our system .\n4.1 \u00b5Spotlight System\nThe first system we built , called \u03bcSpotlight , used as the Spotlight device , an Infocus LD530 projector connected to an IBM Thinkpad laptop .\nThe system is shown in Figure 10 .\nThe Event Distribution Function was implemented as a Java GUI .\nDue to the stringent timing requirements and the delay caused by the buffering in the windowing system of a PC , we used the Full-Screen Exclusive Mode API provided by Java2 .\nThis allowed us to bypass the windowing system and more precisely estimate the time when an event is displayed by the projector , hence a higher accuracy of timestamps of events .\nBecause of the 50Hz refresh rate of our projector , there was still an uncertainty in the time stamping of the events of 20msec .\nWe explored the possibility of using and modifying the Linux kernel to expose the vertical synch ( VSYNCH ) interrupt , generated by the displaying device after each screen refresh , out of the kernel mode .\nThe performance evaluation results showed , however , that this level of accuracy was not needed .\nThe sensor nodes that we used were Berkeley Mica2 motes equipped with MTS310 multi-sensor boards from Crossbow .\nThis sensor board contains a CdSe photo sensor which can detect the light from the projector .\nFigure 10 .\n\u03bcSpotlight system implementation\nWith this implementation of the Spotlight system , we were able to generate Point , Line and Area Scan events .\n4.2 Spotlight System\nThe second Spotlight system we built used , as the Spotlight device , diode lasers , a computerized telescope mount ( Celestron CG-5GT , shown in Figure 11 ) , and an IBM Thinkpad laptop .\nThe laptop was connected , through RS232 interfaces 
, to the telescope mount and to one XSM600CA [ 7 ] mote , acting as a base station .\nThe diode lasers we used ranged in power from 7mW to 35mW .\nThey emitted at 650nm , close to the point of highest sensitivity for CdSe photosensor .\nThe diode lasers were equipped with lenses that allowed us to control the divergence of the beam .\nFigure 11 .\nSpotlight system implementation\nThe telescope mount has worm gears for a smooth motion and high precision angular measurements .\nThe two angular measures that we used were the , so called , Alt ( from Altitude ) and Az ( from Azimuth ) .\nIn astronomy , the Altitude of a celestial object is its angular distance above or below the celestial horizon , and the Azimuth is the angular distance of an object eastwards of the meridian , along the horizon .\nThe laptop computer , through a Java GUI , controls the motion of the telescope mount , orienting it such that a full Point Scan of an area is performed , similar to the one described in Figure 3 ( b ) .\nFor each turning point i , the 3-tuple ( Alti and Azi angles and the timestamp ti ) is recorded .\nThe Spotlight system uses the timestamp received from a sensor node j , to obtain the angular measures Altj and Azj for its location .\nFor the sensor nodes , we used XSM motes , mainly because of their longer communication range .\nThe XSM mote has the photo sensor embedded in its main board .\nWe had to make minor adjustments to the plastic housing , in order to expose the photo sensor to the outside .\nThe same mote code , written in nesC , for TinyOS , was used for both \u00b5Spotlight and Spotlight system implementations .\n4.3 Event Detection Function D ( t )\nThe Event Detection Function aims to detect the beginning of an event and record the time when the event was observed .\nWe implemented a very simple detection function based on the observed maximum value .\nAn event i will be time stamped with time ti , if the reading from the photo sensor dti , fulfills the condition :\nwhere dmax is the maximum value reported by the photo sensor before ti and \u0394 is a constant which ensures that the first large detection gives the timestamp of the event ( i.e. 
small variations around the first large signal are not considered ) .\nHence \u0394 guarantees that only sharp changes in the detected value generate an observed event .\n4.4 Localization Function L ( T )\nThe Localization Function is implemented in the Java GUI .\nIt matches the timestamps created by the Event Distribution Function with those reported by the sensor nodes .\nThe Localization Function for the Point Scan EDF has as input a time sequence Ti = { t1 } , as reported by node i .\nThe function performs a simple search for the event with a timestamp closest to t1 .\nIf t1 is constrained by :\nwhere en and en +1 are two consecutive events , then the obtained location for node i is : tj ' \u2208 Ti and dij = 0 if tj ' \u2209 Ti .\nThe function performs a search for an event with an identical code .\nIf the following condition is true :\nwhere en is an event with code den , then the inferred location for node i is :\n4.5 Time Synchronization\nThe time synchronization in the Spotlight system consists of two parts : - Synchronization between sensor nodes : This is achieved through the Flooding Time Synchronization Protocol [ 18 ] .\nIn this protocol , synchronized nodes ( the root node is the only synchronized node at the beginning ) send time synchronization message to unsynchronized nodes .\nThe sender puts the time stamp into the synchronization message right before the bytes containing the time stamp are transmitted .\nOnce a receiver gets the message , it follows the sender 's time and performs the necessary calculations to compensate for the clock drift .\n- Synchronization between the sensor nodes and the Spotlight device : We implemented this part through a two-way handshaking between the Spotlight device and one node , used as the base station .\nThe sensor node is attached to the Spotlight device through a serial interface .\nFigure 12 .\nTwo-way synchronization\nAs shown in Figure 12 , let 's assume that the Spotlight device sends a synchronization message ( SYNC ) at local time T1 , the sensor node receives it at its local time T2 and acknowledges it at local time T3 ( both T2 and T3 are sent back through ACK ) .\nAfter the Spotlight device receives the ACK , at its local time T4 , the time synchronization can be achieved as follows :\nThe case for the Line Scan is treated similarly .\nThe input to the Localization Function is the time sequence Ti = { t1 , t2 } as reported by node i .\nIf the reported timestamps are constrained by :\nwhere en and en +1 are two consecutive events on the horizontal scan and em and em +1 are two consecutive events on vertical scan , then the inferred location for node i is :\nThe Localization Function for the Area Cover EDF has as input a timestamp set Ti = { ti1 , ti2 , ... , tin } of the n events , detected by node i .\nWe recall the notation for the set of m timestamps of events generated by the Spotlight device , T ' = { t1 ' , t2 ' , ... , tm ' } .\nA code di = di1di2 ... 
5 . PERFORMANCE EVALUATION\nIn this section we present the performance evaluation of the Spotlight systems when using the three event distribution functions , i.e. Point Scan , Line Scan and Area Cover , described in Section 3 .\nFor the µSpotlight system we used 10 Mica2 motes .\nThe sensor nodes were attached to a vertically positioned Veltex board .\nBy projecting the light onto the sensor nodes , we were able to generate well controlled Point , Line and Area events .\nThe Spotlight device was able to generate events , i.e. project light patterns , covering an area of approximately 180cm x 140cm .\nThe screen resolution for the projector was 1024x768 , and the movement of the Point Scan and Line Scan techniques was done through increments ( in the appropriate direction ) of 10 pixels between events .
Each experimental point was obtained from 10 successive runs of the localization procedure .\nEach set of 10 runs was preceded by a calibration phase , aimed at estimating the total delays ( between the Spotlight device and each sensor node ) in detecting an event .\nDuring the calibration , we created an event covering the entire sensor field ( i.e. we illuminated the entire area ) .\nThe timestamp reported by each sensor node , in conjunction with the timestamp created by the Spotlight device , was used to obtain the time offset for each sensor node .\nMore sophisticated calibration procedures have been reported previously [ 35 ] .\nIn addition to the time offset , we added a manually configurable parameter , called bias , used to best estimate the center of an event .
Figure 13 . Deployment site for the Spotlight system\nFor the Spotlight system evaluation , we deployed 10 XSM motes in a football field .\nThe site is shown in Figure 13 ( laser beams are depicted with red arrows and sensor nodes with white dots ) .\nTwo sets of experiments were run , with the Spotlight device positioned at 46m and at 170m from the sensor field .\nThe sensor nodes were aligned and the Spotlight device executed a Point Scan .\nThe localization system computed the coordinates of the sensor nodes , and the Spotlight device was oriented , through a GoTo command sent to the telescope mount , towards the computed location .\nIn the initial stages of the experiments , we manually measured the localization error .
For our experimental evaluation , the metrics of interest were as follows :\n- Localization error , defined as the distance between the real location and the one obtained from the Spotlight system .\n- Localization duration , defined as the time span between the first and last event .\n- Localization range , defined as the maximum distance between the Spotlight device and the sensor nodes .\n- A Localization Cost function Cost : { localization accuracy } x { localization duration } → [ 0 , 1 ] , which quantifies the trade-off between the accuracy in localization and the localization duration .\nThe objective is to minimize the Localization Cost function .\nBy denoting with ei the localization error for the ith scenario , with di the localization duration for the ith scenario , with max ( e ) the maximum localization error , with max ( d ) the maximum localization duration , and with α the importance factor , the Localization Cost function is formally defined as :\nCost ( i ) = α · ei / max ( e ) + ( 1 − α ) · di / max ( d )\n( see the sketch following this list ) .\n- Localization Bias .\nThis metric is used to investigate the effectiveness of the calibration procedure .\nIf , for example , all computed locations have a bias in the west direction , a calibration factor can be used to compensate for the difference .
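The sketch referenced in the cost-function item above shows one way this metric can be computed; it simply evaluates the convex combination of normalized error and normalized duration defined in the list, with method and parameter names of our own choosing.

```java
/** Illustrative computation of the Localization Cost metric:
 *  a convex combination of normalized error and normalized duration. */
public class LocalizationCost {

    /**
     * @param error       localization error e_i of scenario i
     * @param duration    localization duration d_i of scenario i
     * @param maxError    maximum localization error max(e) over all scenarios
     * @param maxDuration maximum localization duration max(d)
     * @param alpha       importance factor in [0, 1]; alpha = 1 weighs only
     *                    accuracy, alpha = 0 weighs only duration
     * @return cost value in [0, 1]; lower is better
     */
    public static double cost(double error, double duration,
                              double maxError, double maxDuration,
                              double alpha) {
        return alpha * (error / maxError)
             + (1.0 - alpha) * (duration / maxDuration);
    }

    public static void main(String[] args) {
        // Example: alpha = 0.5 treats accuracy and duration as equally important.
        System.out.println(cost(0.05, 12.0, 0.11, 40.0, 0.5));
    }
}
```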
The parameters that we varied during the performance evaluation of our system were : the type of scanning ( Point , Line and Area ) , the size of the event , the duration of the event ( for Area Cover ) , the scanning speed , the power of the laser and the distance between the Spotlight device and the sensor field , to estimate the range of the system .
5.1 Point Scan - µSpotlight system\nIn this experiment , we investigated how the size of the event and the scanning speed affect the localization error .\nFigure 14 shows the mean localization errors with their standard deviations .\nIt can be observed that , while the scanning speed ( varying between 35cm/sec and 87cm/sec ) has a minor influence on the localization accuracy , the size of the event has a dramatic effect .\nFigure 14 . Localization Error vs. Event Size for the Point Scan EDF\nThe obtained localization error varied from as little as 2cm to over 11cm for the largest event .\nThis dependence can be explained by our Event Detection algorithm : the first detection above a threshold gave the timestamp for the event .
The duration of the localization scheme is shown in Figure 15 .\nThe dependency of the localization duration on the size of the event and the scanning speed is natural .\nA bigger event allows a reduction in the total duration of up to 70 % .\nThe localization duration is inversely proportional to the scanning speed , as expected and as depicted in Figure 15 .\nFigure 15 . Localization Duration vs. Event Size for the Point Scan EDF
An interesting trade-off is between the localization accuracy ( usually the most important factor ) and the localization time ( important in environments where stealthiness is paramount ) .\nFigure 16 shows the Localization Cost function , for α = 0.5 ( accuracy and duration are equally important ) .\nAs shown in Figure 16 , an event size of approximately 10-15cm ( depending on the scanning speed ) minimizes our Cost function .\nFor α = 1 , the same graph would be a monotonically increasing function , while for α = 0 , it would be a monotonically decreasing function .\nFigure 16 . Localization Cost vs. Event Size for the Point Scan EDF
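The duration trends discussed for the Point Scan follow directly from the scan geometry. Using the duration model stated later in Section 5.4, t = (L · l) / (s · Es), the sketch below gives a rough duration estimate for a given field size, scanning speed and event size; the class name and the numbers in the example are hypothetical.

```java
/** Back-of-the-envelope estimate of Point Scan localization duration,
 *  t = (L * l) / (s * Es), following the model given in Section 5.4. */
public class PointScanDuration {

    /**
     * @param fieldLengthM length L of the sensor field, in meters
     * @param fieldWidthM  width l of the sensor field, in meters
     * @param scanSpeedMps scanning speed s, in meters per second
     * @param eventSizeM   event size Es (laser spot diameter), in meters
     * @return estimated scan duration in seconds
     */
    public static double durationSeconds(double fieldLengthM, double fieldWidthM,
                                         double scanSpeedMps, double eventSizeM) {
        return (fieldLengthM * fieldWidthM) / (scanSpeedMps * eventSizeM);
    }

    public static void main(String[] args) {
        // Hypothetical football-field-sized deployment: 100m x 50m field,
        // 1 m/s scanning speed, 0.5m event size -> 10000 seconds.
        System.out.println(durationSeconds(100, 50, 1.0, 0.5));
    }
}
```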
5.2 Line Scan - µSpotlight system\nIn a similar manner to the Point Scan EDF , for the Line Scan EDF we were interested in the dependency of the localization error and duration on the size of the event and the scanning speed .\nWe show in Figure 17 the localization error for different event sizes .\nIt is interesting to observe the concave shape of the localization error as a function of the event size .\nMoreover , a question that arises is why the same dependency was not observed in the case of the Point Scan EDF .\nFigure 17 . Localization Error vs. Event Size for the Line Scan EDF
The explanation for this concave dependency is the existence of a bias in the location estimation .\nAs a reminder , a bias factor was introduced in order to best estimate the central point of events that have a large size .\nWhat Figure 17 shows is that the bias factor was optimal for an event size of approximately 7cm .\nFor events smaller and larger than this , the bias factor was too large and too small , respectively .\nThus , it introduced biased errors in the position estimation .\nThe reason why we did not observe the same dependency in the case of the Point Scan EDF is that we did not experiment with event sizes below 7cm , due to the long time it would have taken to scan the entire field with events as small as 1.7 cm .
The results for the localization duration as a function of the size of the event are shown in Figure 18 .\nAs shown , the localization duration is inversely proportional to the scanning speed .\nThe size of the event has a smaller influence on the localization duration .\nThe average localization duration is about 10sec , much shorter than the duration obtained in the Point Scan experiment .
The Localization Cost function dependency on the event size and scanning speed , for α = 0.5 , is shown in Figure 19 .\nThe dependency on the scanning speed is very small ( the Cost function achieves a minimum in the same 4-6cm range ) .\nIt is interesting to note that this 4-6cm optimal event size is smaller than the one observed in the case of the Point Scan EDF .\nThe explanation is that the smaller localization duration observed in the Line Scan EDF allowed a shift ( towards smaller event sizes ) in the total Localization Cost function .\nFigure 18 . Localization Duration vs. Event Size for the Line Scan EDF\nFigure 19 . Cost Function vs. Event Size for the Line Scan EDF
During our experiments with the Line Scan EDF , we observed evidence of a bias in the location estimation .\nThe estimated locations for all sensor nodes exhibited different biases for different event sizes .\nFor example , for an event size of 17.5 cm , the estimated location for the sensor nodes was to the upper-left side of the actual location .\nThis was equivalent to an "early" detection , since our scanning was done from left to right and from top to bottom .\nThe scanning speed did not influence the bias .\nIn order to better understand the observed phenomenon , we analyzed our data .\nFigure 20 shows the bias in the horizontal direction , for different event sizes ( the vertical bias was almost identical , and we omit it due to space constraints ) .\nFrom Figure 20 , one can observe that the smallest observed bias , and hence the most accurate positioning , was for an event of size 7cm .\nThese results are consistent with the observed localization error , shown in Figure 17 .
We also adjusted the measured localization error ( shown in Figure 17 ) for the observed bias ( shown in Figure 20 ) .\nThe results for this ideal case of the Spotlight localization system with the Line Scan EDF are shown in Figure 21 .\nThe errors are remarkably small , varying between 0.1 cm and 0.8 cm , with a general trend of higher localization errors for larger event sizes .\nFigure 20 . Position Estimation Bias for the Line Scan EDF\nFigure 21 . Position Estimation w/o Bias ( ideal ) , for the Line Scan EDF
5.3 Area Cover - µSpotlight system\nIn this experiment , we investigated how the number of bits used to encode the entire sensor field affected the localization accuracy .\nIn our first experiment we did not use error correcting codes .\nThe results are shown in Figure 22 .\nFigure 22 . Localization Error vs. Event Size for the Area Cover EDF\nOne can observe a remarkable accuracy , with localization errors on the order of 0.3-0.6 cm .\nWhat is important to observe is the variance in the localization error .\nIn the scenario where 12 bits were used , while the average error was very small , there were a couple of cases where an incorrect event detection generated a larger than expected error .\nAn example of how this error can occur was described in Section 3.4 .\nThe experimental results , presented in Figure 22 , emphasize the need for error correction of the bit patterns observed and reported by the sensor nodes .
The localization duration results are shown in Figure 23 .\nIt can be observed that the duration is directly proportional to the number of bits used , with total durations ranging from 3sec for the least accurate method to 6-7sec for the most accurate .\nThe duration of an event had a small influence on the total localization time , when considering the same scenario ( same number of bits for the code ) .\nThe Cost Function dependency on the number of bits in the code , for α = 0.5 , is shown in Figure 24 .\nGenerally , since the localization duration for the Area Cover can be extremely small , a higher accuracy in the localization is desired .\nWhile the Cost function achieves a minimum when 10 bits are used , we attribute the slight increase observed when 12 bits were used to the two 12-bit scenarios where larger-than-expected errors ( 6-7mm ) were observed , as shown in Figure 22 .\nFigure 23 . Localization Duration vs. Event Size for the Area Cover EDF\nFigure 24 . Cost Function vs. Event Size for the Area Cover
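For the Area Cover results above, each node's code is built exactly as defined in Section 4.4: bit j is set when the node detected the j-th event generated by the Spotlight device. The sketch below shows this construction together with a plain exact-match lookup; the timestamp tolerance is an assumption we add to make clock mismatch explicit, and the extended Golay ( 24 , 12 ) decoding used for the error-corrected results is only indicated in a comment.

```java
import java.util.BitSet;
import java.util.Map;
import java.util.Set;

/** Illustrative construction and lookup of the Area Cover code d_i
 *  (Section 4.4). Data structures and names are ours. */
public class AreaCoverCode {

    /**
     * Build the m-bit code for one node: bit j is 1 iff the node reported a
     * detection matching the j-th event generated by the Spotlight device.
     *
     * @param deviceTimestamps the m event timestamps t_1', ..., t_m'
     * @param nodeTimestamps   timestamps of events detected by the node
     * @param toleranceMicros  matching tolerance between the two clocks (our assumption)
     */
    public static BitSet buildCode(long[] deviceTimestamps,
                                   Set<Long> nodeTimestamps,
                                   long toleranceMicros) {
        BitSet code = new BitSet(deviceTimestamps.length);
        for (int j = 0; j < deviceTimestamps.length; j++) {
            long tj = deviceTimestamps[j];
            boolean detected = nodeTimestamps.stream()
                    .anyMatch(t -> Math.abs(t - tj) <= toleranceMicros);
            code.set(j, detected);
        }
        return code;
    }

    /** Look up the location whose code matches the node's code exactly.
     *  With error-correcting codes (e.g. extended Golay (24, 12)), the
     *  received code would first be decoded to the nearest valid codeword. */
    public static double[] locate(Map<BitSet, double[]> codeToLocation,
                                  BitSet nodeCode) {
        return codeToLocation.get(nodeCode);   // null if no exact match
    }
}
```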
Figure 25 . Localization Error w/ and w/o Error Correction\nThe two problematic scenarios ( shown in Figure 22 , where for 12-bit codes we observed errors larger than the event size , due to errors in detection ) were further explored by using error correction codes .\nAs described in Section 3.3 , we implemented an extended Golay ( 24 , 12 ) error correction mechanism in our location estimation algorithm .\nThe experimental results are depicted in Figure 25 , and show a consistent accuracy .\nThe scenario without error correction codes is simply the same 12-bit code scenario shown in Figure 22 .\nWe only investigated the 12-bit scenario , due to its match with the 12-bit data required by the Golay encoding scheme ( the extended Golay code produces 24-bit codewords ) .
5.4 Point Scan - Spotlight system\nIn this section we describe the experiments performed at a football stadium , using our Spotlight system .\nThe hardware that we had available allowed us to evaluate the Point Scan technique of the Spotlight system .\nIn our evaluation , we were interested in the performance of the system at different ranges .\nFigures 26 and 27 show the localization error versus the event size at two different ranges : 46m and 170m .\nFigure 26 shows a remarkable accuracy in localization .\nThe errors are in the centimeter range .\nOur initial manual measurements of the localization error were often difficult to make , since the laser spot almost perfectly covered the XSM mote .\nWe are able to achieve localization errors of a few centimeters , a level of accuracy that only range-based localization schemes are able to achieve [ 25 ] .\nThe observed dependency on the size of the event is similar to the one observed in the µSpotlight system evaluation , shown in Figure 14 .\nThis shows that the µSpotlight system is a viable alternative for investigating complex EDFs without incurring the cost of the full hardware .\nFigure 26 . Localization Error vs. Event Size for Spotlight system at 46m
In the experiments performed over a much longer distance between the Spotlight device and the sensor network , the average localization error remained very small .\nLocalization errors of 5-10cm were measured , as Figure 27 shows .\nThe accuracy the system is capable of is remarkable , considering that the Spotlight system operated over the length of a football stadium .\nThroughout our experimentation with the Spotlight system , we have observed localization errors that were simply offsets of the real locations .\nSince the same phenomenon was observed when experimenting with the µSpotlight system , we believe that with auto-calibration the localization error can be further reduced .\nFigure 27 .\nLocalization Error vs. 
Event Size for Spotlight system at 170m\nThe time required for localization using the Spotlight system with a Point Scan EDF , is given by : t = ( L * l ) / ( s * Es ) , where L and l are the dimensions of the sensor network field , s is the scanning speed , and Es is the size of the event .\nFigure 28 shows the time for localizing a sensor network deployed in an area of size of a football field using the Spotlight system .\nHere we ignore the message propagation time , from the sensor nodes to the Spotlight device .\nFrom Figure 28 it can be observed that the very small localization errors are prohibitively expensive in the case of the Point Scan .\nWhen localization errors of up to 1m are tolerable , localization duration can be as low as 4 minutes .\nLocalization durations of 5-10 minutes , and localization errors of 1m are currently state of art in the realm of range-free localization schemes .\nAnd these results are achieved by using the Point Scan scheme , which required the highest Localization Time , as it was shown in Table 1 .\nFigure 28 .\nLocalization Time vs. Event Size for Spotlight\nsystem One important characteristic of the Spotlight system is its range .\nThe two most important factors are the sensitivity of the photosensor and the power of the Spotlight source .\nWe were interested in measuring the range of our Spotlight system , considering our capabilities ( MTS310 sensor board and inexpensive , $ 12 - $ 85 , diode laser ) .\nAs a result , we measured the intensity of the laser beam , having the same focus , at different distances .\nThe results are shown in Figure 29 .\nFigure 29 .\nLocalization Range for the Spotlight system\nFrom Figure 29 , it can be observed that only a minor decrease in the intensity occurs , due to absorption and possibly our imperfect focusing of the laser beam .\nA linear fit of the experimental data shows that distances of up to 6500m can be achieved .\nWhile we do not expect atmospheric conditions , over large distances , to be similar to our 200m evaluation , there is strong evidence that distances ( i.e. 
altitude ) of 1000-2000m can easily be achieved .\nThe angle between the laser beam and the vertical should be minimized ( less than 45 \u00b0 ) , as it reduces the difference between the beam cross-section ( event size ) and the actual projection of the beam on the ground .\nIn a similar manner , we were interested in finding out the maximum size of an event , that can be generated by a COTS laser and that is detectable by the existing photosensor .\nFor this , we\nvaried the divergence of the laser beam and measured the light intensity , as given by the ADC count .\nThe results are shown in Figure 30 .\nIt can be observed that for the less powerful laser , an event size of 1.5 m is the limit .\nFor the more powerful laser , the event size can be as high as 4m .\nThrough our extensive performance evaluation results , we have shown that the Spotlight system is a feasible , highly accurate , low cost solution for localization of wireless sensor networks .\nFrom our experience with sources of laser radiation , we believe that for small and medium size sensor network deployments , in areas of less than 20,000 m2 , the Area Cover scheme is a viable solution .\nFor large size sensor network deployments , the Line Scan , or an incremental use of the Area Cover are very good options .\nFigure 30 .\nDetectable Event Sizes that can be generated by\nCOTS lasers 6 .\nOPTIMIZATIONS/LESSONS LEARNED\n6.1 Distributed Spotlight System\nThe proposed design and the implementation of the Spotlight system can be considered centralized , due to the gathering of the sensor data and the execution of the Localization Function L ( t ) by the Spotlight device .\nWe show that this design can easily be transformed into a distributed one , by offering two solutions .\nOne idea is to disseminate in the network , information about the path of events , generated by the EDF ( similar to an equation , describing a path ) , and let the sensor nodes execute the Localization Function .\nFor example , in the Line Scan scenario , if the starting and ending points for the horizontal and vertical scans , and the times they were reached , are propagated in the network , then any sensor in the network can obtain its location ( assuming a constant scanning speed ) .\nA second solution is to use anchor nodes which know their positions .\nIn the case of Line Scan , if three anchors are present , after detecting the presence of the two events , the anchors flood the network with their locations and times of detection .\nUsing the same simple formulas as in the previous scheme , all sensor nodes can infer their positions .\n6.2 Localization Overhead Reduction\nAnother requirement imposed by the Spotlight system design , is the use of a time synchronization protocol between the Spotlight device and the sensor network .\nRelaxing this requirement and imposing only a time synchronization protocol among sensor nodes is a very desirable objective .\nThe idea is to use the knowledge that the Spotlight device has about the speed with which the scanning of the sensor field takes place .\nIf the scanning speed is constant ( let 's call it s ) , then the time difference ( let 's call it \u0394t ) between the event detections of two sensor nodes is , in fact , an accurate measure of the range between them : d = s * \u0394t .\nHence , the Spotlight system can be used for accurate ranging of the distance between any pair of sensor nodes .\nAn important observation is that this ranging technique does not suffer from limitations of others : small 
range and directionality for ultrasound , or irregularity , fading and multipath for Received Signal Strength Indicator ( RSSI ) .\nAfter the ranges between nodes have been determined ( either in a centralized or distributed manner ) graph embedding algorithms can be used for a realization of a rigid graph , describing the sensor network topology .\n6.3 Dynamic Event Distribution Function E ( t )\nAnother system optimization is for environments where the sensor node density is not uniform .\nOne disadvantage of the Line Scan technique , when compared to the Area Cover , is the localization time .\nAn idea is to use two scans : one which uses a large event size ( hence larger localization errors ) , followed by a second scan in which the event size changes dynamically .\nThe first scan is used for identifying the areas with a higher density of sensor nodes .\nThe second scan uses a larger event in areas where the sensor node density is low and a smaller event in areas with a higher sensor node density .\nA dynamic EDF can also be used when it is very difficult to meet the power requirements for the Spotlight device ( imposed by the use of the Area Cover scheme in a very large area ) .\nIn this scenario , a hybrid scheme can be used : the first scan ( Point Scan ) is performed quickly , with a very large event size , and it is meant to identify , roughly , the location of the sensor network .\nSubsequent Area Cover scans will be executed on smaller portions of the network , until the entire deployment area is localized .\n6.4 Stealthiness\nOur implementation of the Spotlight system used visible light for creating events .\nUsing the system during the daylight or in a room well lit , poses challenges due to the solar or fluorescent lamp radiation , which generate a strong background noise .\nThe alternative , which we used in our performance evaluations , was to use the system in a dark room ( \u03bcSpotlight system ) or during the night ( Spotlight system ) .\nWhile using the Spotlight system during the night is a good solution for environments where stealthiness is not important ( e.g. environmental sciences ) for others ( e.g. military applications ) , divulging the presence and location of a sensor field , could seriously compromise the efficacy of the system .\nFigure 31 .\nFluorescent Light Spectra ( top ) , Spectral Response for CdSe cells ( bottom ) A solution to this problem , which we experimented with in the \u00b5Spotlight system , was to use an optical filter on top of the light\nsensor .\nThe spectral response of a CdSe photo sensor spans almost the entire visible domain [ 37 ] , with a peak at about 700nm ( Figure 31-bottom ) .\nAs shown in Figure 31-top , the fluorescent light has no significant components above 700nm .\nHence , a simple red filter ( Schott RG-630 ) , which transmits all light with wavelength approximately above 630nm , coupled with an Event Distribution Function that generates events with wavelengths above the same threshold , would allow the use of the system when a fluorescent light is present .\nA solution for the Spotlight system to be stealthy at night , is to use a source of infra-red radiation ( i.e. 
laser ) emitting in the range [ 750 , 1000 ] nm .\nFor a daylight use of the Spotlight system , the challenge is to overcome the strong background of the natural light .\nA solution we are considering is the use of a narrow-band optical filter , centered at the wavelength of the laser radiation .\nThe feasibility and the cost-effectiveness of this solution remain to be proven .\n6.5 Network Deployed in Unknown Terrain\nA further generalization is when the map of the terrain where the sensor network was deployed is unknown .\nWhile this is highly unlikely for many civil applications of wireless sensor network technologies , it is not difficult to imagine military applications where the sensor network is deployed in a hostile and unknown terrain .\nA solution to this problem is a system that uses two Spotlight devices , or equivalently , the use of the same device from two distinct positions , executing , from each of them , a complete localization procedure .\nIn this scheme , the position of the sensor node is uniquely determined by the intersection of the two location directions obtained by the system .\nThe relative localization ( for each pair of Spotlight devices ) will require an accurate knowledge of the 3 translation and 3 rigid-body rotation parameters for Spotlight 's position and orientation ( as mentioned in Section 3 ) .\nThis generalization is also applicable to scenarios where , due to terrain variations , there is no single aerial point with a direct line of sight to all sensor nodes , e.g. hilly terrain .\nBy executing the localization procedure from different aerial points , the probability of establishing a line of sight with all the nodes , increases .\nFor some military scenarios [ 1 ] [ 12 ] , where open terrain is prevalent , the existence of a line of sight is not a limiting factor .\nIn light of this , the Spotlight system can not be used in forests or indoor environments .\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper we presented the design , implementation and evaluation of a localization system for wireless sensor networks , called Spotlight .\nOur localization solution does not require any additional hardware for the sensor nodes , other than what already exists .\nAll the complexity of the system is encapsulated into a single Spotlight device .\nOur localization system is reusable , i.e. 
the costs can be amortized through several deployments , and its performance is not affected by the number of sensor nodes in the network .\nOur experimental results , obtained from a real system deployed outdoors , show that the localization error is less than 20cm .\nThis error is currently state of art , even for range-based localization systems and it is 75 % smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [ 31 ] .\nAs future work , we would like to explore the self-calibration and self-tuning of the Spotlight system .\nThe accuracy of the system can be further improved if the distribution of the event , instead of a single timestamp , is reported .\nA generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques ."} {"id": "C-33", "title": "", "abstract": "", "keyphrases": ["context-awar", "context provid", "negoti", "context-awar comput", "concret negoti model", "distribut applic", "pervas comput", "reput", "context qualiti", "persuas argument"], "prmu": [], "lvl-1": "Rewards-Based Negotiation for Providing Context Information Bing Shi State Key Laboratory for Novel Software Technology NanJing University NanJing, China shibing@ics.nju.edu.cn Xianping Tao State Key Laboratory for Novel Software Technology NanJing University NanJing, China txp@ics.nju.edu.cn Jian Lu State Key Laboratory for Novel Software Technology NanJing University NanJing, China lj@nju.edu.cn ABSTRACT How to provide appropriate context information is a challenging problem in context-aware computing.\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate.\nIn this paper, we propose a novel approach based on negotiation with rewards to solving such problem.\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate proceeds.\nIn order to support our approach, we have designed a concrete negotiation model with rewards.\nWe also evaluate our approach and show that it indeed can choose an appropriate context provider and allocate the proceeds fairly.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applicationsproviding context information General Terms Context 1.\nINTRODUCTION Context-awareness is a key concept in pervasive computing.\nContext informs both recognition and mapping by providing a structured, unified view of the world in which the system operates [1].\nContext-aware applications exploit context information, such as location, preferences of users and so on, to adapt their behaviors in response to changing requirements of users and pervasive environments.\nHowever, one specific kind of context can often be provided by different context providers (sensors or other data sources of context information) with different quality levels.\nFor example, in a smart home, thermometer A``s measurement precision is 0.1 \u25e6 C, and thermometer B``s measurement precision is 0.5 \u25e6 C. Thus A could provide more precise context information about temperature than B. 
Moreover, sometimes different context providers may provide conflictive context information.\nFor example, different sensors report that the same person is in different places at the same time.\nBecause context-aware applications utilize context information to adapt their behaviors, inappropriate context information may lead to inappropriate behavior.\nThus we should design a mechanism to provide appropriate context information for current context-aware applications.\nIn pervasive environments, context providers considered as relatively independent entities, have their own interests.\nThey hope to get proceeds when they provide context information.\nHowever, most existing approaches consider context providers as entities without any personal interests, and use a centralized arbitrator provided by the middleware to decide who can provide appropriate context.\nThus the burden of the middleware is very heavy, and its decision may be unfair and harm some providers'' interests.\nMoreover, when such arbitrator is broken down, it will cause serious consequences for context-aware applications.\nIn this paper, we let distributed context providers themselves decide who provide context information.\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future, providers try to get the right to provide good context to enhance their reputation.\nIn order to get such right, context providers may agree to share some portion of the proceeds with its opponents.\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds.\nOur approach has some specific advantages: 1.\nWe do not need an arbitrator provided by the middleware of pervasive computing to decide who provides context.\nThus it will reduce the burden of the middleware.\n2.\nIt is more reasonable that distributed context providers decide who provide context, because it can avoid the serious consequences caused by a breakdown of a centralized arbitrator.\n3.\nIt can guarantee providers'' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems.\n4.\nThis approach can choose an appropriate provider automatically.\nIt does not need any applications and users'' intervention.\nThe negotiation model we have designed to support our approach is also a novel model in negotiation domain.\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of next negotiation process (i.e. 
rewards).\nNegotiator may find current offer and reward worth more than counter-offer which will delay the agreement, and accepts current offer and reward.\nWithout the reward, it may find current offer worth less than the counter-offer, and proposes its counter-offer.\nIt will cost more time to reach agreement.\nIt also expands the negotiation space considered in present negotiation process, and therefore provides more possibilities to find better agreement.\nThe remainder of this paper is organized as follows.\nSection 2 presents some assumptions.\nSection 3 describes our approach based on negotiation detailedly, including utility functions, negotiation protocol and context providers'' strategies.\nSection 4 evaluates our approach.\nIn section 5 we introduce some related work and conclude in section 6.\n2.\nSOME ASSUMPTIONS Before introducing our approach, we would like to give some assumptions: 1.\nAll context providers are well-meaning and honest.\nDuring the negotiation process, they exchange information honestly.\nRewards confirmed in this negotiation process will be fulfilled in the next negotiation process.\n2.\nAll providers must guarantee the system``s interests.\nThey should provide appropriate context information for current applications.\nAfter guaranteeing the system``s interest, they can try to maximize their own personal interests.\nThe assumption is reasonable, because when an inappropriate context provider gets the right to provide bad context, as a punishment, its reputation will decrease, and the proceeds is also very small.\n3.\nAs context providers are independent, factors which influence their negotiation stance and behavior are private and not available to their opponents.\nTheir utility functions are also private.\n4.\nSince the negotiation takes place in pervasive environments, time is a critical factors.\nThe current application often hopes to get context information as quickly as possible, so the time cost to reach agreement should be as short as possible.\nContext providers often have strict deadline by when the negotiation must be completed.\nAfter presenting these assumptions, we will propose our approach based on negotiation with rewards in the next section.\n3.\nOUR APPROACH In the beginning, we introduce the concepts of reputation and Quality of Context (QoC) attributes.\nBoth will be used in our approach.\nReputation of an agent is a perception regarding its behavior norms, which is held by other agents, based on experiences and observation of its past actions [7].\nHere agent means context provider.\nEach provider``s reputation indicates its historical ability to provide appropriate context information.\nQuality of Context (QoC) attributes characterize the quality of context information.\nWhen applications require context information, they should specify their QoC requirements which express constraints of QoC attributes.\nContext providers can specify QoC attributes for the context information they deliver.\nAlthough we can decide who provides appropriate context according to QoC requirements and context providers'' QoC information, applications'' QoC requirements might not reflect the actual quality requirements.\nThus, in addition to QoC, reputation information of context providers is another factor affecting the decision who can provide context information.\nNegotiation is a process by which a joint decision is made by two or more parties.\nThe parties first verbalize contradictory demands and then move towards agreement by a process of concession 
making or search for new alternatives [2].\nIn pervasive environments, all available context providers negotiate with each other to decide who can provide context information.\nThis process will be repeated because a kind of context is needed more than one time.\nNegotiation using persuasive arguments (such as threats, promises of future rewards, and appeals) allows negotiation parties to influence each others'' preferences to reach better deals effectively and efficiently [9].\nThis pervasive negotiation is effective in repeated interaction because arguments can be constructed to directly impact future encounters.\nIn this paper, for simplicity, we let negotiation take place between two providers.\nWe extend Raiffa``s basic model for bilateral negotiation [8], and allow negotiators to negotiate with each other by exchanging arguments in the form of promises of future rewards or requests for future rewards.\nRewards mean some extra proceeds in the next negotiation process.\nThey can influence outcomes of current and future negotiation.\nIn our approach, as described by Figure 1, the current application requires Context Manager to provide a specific type of context information satisfying QoC requirements.\nContext Manager finds that provider A and B can provide such kind of context with different quality levels.\nThen the manager tells A and B to negotiate to reach agreement on who can provide the context information and how they will allocate the proceeds.\nBoth providers get reputation information from the database Reputation of Context Providers and QoC requirements, and then negotiate with each other according to our negotiation model.\nWhen negotiation is completed, the chosen provider will provide the context information to Context Manager, and then Context Manager delivers such information to the application and also stores it in Context Knowledge Base where current and historical context information is stored.\nThe current application gives the feedback information about the provided context, and then Context Manager will update the chosen provider``s reputation information according to the feedback information.\nContext Manager also provides the proceeds to providers according to the feedback information and the time cost on negotiation.\nIn the following parts of this section, we describe our negotiation model in detail, including context providers'' utility functions to evaluate offers and rewards, negotiation protocol, and strategies to generate offers and rewards.\nContext Knowledge Base Reputation of Context Providers Context provider A Context Manager Negotiate Application``s QoC requirements and feedback Provide QoC requirements and proceeds Manage Context Provide Context Getreputation Getreputation Update reputation information according to feedback Context provider B Figure 1: Negotiate to provide appropriate context information.\n3.1 Utility function During the negotiation process, one provider proposes an offer and a reward to the other provider.\nAn offer is noted as o = (c, p): c indicates the chosen context provider and its domain is Dc (i.e. 
the two context providers participating in the negotiation); p means the proposer``s portion of the proceeds, and its domain is Dp = [0,1].\nIts opponent``s portion of the proceeds is 1\u2212p.\nThe reward ep``s domain is Dep = [-1,1], and |ep| means the extra portion of proceeds the proposer promises to provide or requests in the next negotiation process.\nep < 0 means the proposer promises to provide reward, ep > 0 means the proposer requests reward and ep =0 means no reward.\nThe opponent evaluates the offer and reward to decide to accept them or propose a counter-offer and a reward.\nThus context providers should have utility functions to evaluate offers and rewards.\nTime is a critical factor, and only at times in the set T = {0, 1, 2, ... tdeadline}, context providers can propose their offers.\nThe set O include all available offers.\nContext provider A``s utility function of the offer and reward at time t UA : O \u00d7 Dep \u00d7 T \u2192 [\u22121, 1] is defined as: UA(o,ep,t)=(wA 1 \u00b7UA c (c)+wA 2 \u00b7UA p (p)+wA 3 \u00b7UA ep(ep))\u00b7\u03b4A(t) (1) Similarly, the utility function of A``s opponent (i.e. B) can be defined as: UB(o,ep,t)=(wB 1 \u00b7UB c (c)+wB 2 \u00b7UB p (1\u2212p)+wB 3 \u00b7UB ep(\u2212ep))\u00b7\u03b4B(t) In (1), wA 1 , wA 2 and wA 3 are weights given to c, p and ep respectively, and wA 1 + wA 2 + wA 3 =1.\nUsually, the context provider pays the most attention to the system``s interests, pays the least attention to the reward, thus wA 1 > wA 2 > wA 3 .\nUA c : Dc \u2192 [\u22121, 1] is the utility function of the issue who provides context.\nThis function is determined by two factors: the distance between c``s QoC and current application``s QoC requirements, and c``s reputation.\nThe two negotiators acquire c``s QoC information from c, and we use the approach proposed in [4] to calculate the distance between c``s QoC and the application``s Qoc requirements.\nThe required context has n QoC attributes and let the application``s wishes for this context be a = (a1, a2 ... an) (where ai = means the application``s indifference to the i-th QoC attribute), c``s QoC attributes cp = (cp1, cp2 ... cpn) (where cpi = means c``s inability to provide a quantitative value for the i-th QoC attribute).\nBecause numerical distance values of different properties are combined, e.g. location precision in metres with refresh rate in Hz, thus a standard scale for all dimension is needed.\nThe scaling factors for the QoC attributes are s = (s1, s2 ... sn).\nIn addition, different QoC attributes may have different weights: w = (w1, w2 ... wn).\nThen d = (d1, d2 ... dn) di = (cpi \u2212 ai) \u00b7 si \u00b7 wi where cpi\u2212ai = 0 for ai = and cpi\u2212ai = o(ai) for cpi = ( o(.)\ndetermines the application``s satisfaction or dissatisfaction when c is unable to provide an estimate of a QoC attribute, given the value wished for by the application).\nThe distance can be linear distance (1-norm), Euclidean distance (2-norm), or the maximum distance (max-norm): |d| = |d1| + |d2| + ... + |dn| (1 \u2212 norm) ||d||2 = |d1|2 + |d2|2 + ... + |dn|2 (2 \u2212 norm) ||d||\u221e = max{|d1|, |d2| ... 
|dn|} (max \u2212 norm) The detail description of this calculation can be found in [4].\nReputation of c can be acquired from the database Reputation of Context Providers.\nUA c (c) : R \u00d7 Drep \u2192 [\u22121, 1] can be defined as: UA c (c) = wA c1 \u00b7 UA d (d) + wA c2 \u00b7 UA rep(rep) wA c1 and wA c2 are weights given to the distance and reputation respectively, and wA c1 + wA c2 = 1.\nDrep is the domain of reputation information.\nUA d : R \u2192 [0, 1] is a monotonedecreasing function and UA rep : Drep \u2192 [\u22121, 1] is a monotoneincreasing function.\nUA p : Dp \u2192 [0, 1] is the utility function of the portion of proceeds A will receive and it is also a monotone-increasing function.\nA``s utility function of reward ep UA ep : Dep \u2192 [\u22121, 1] is also a monotone-increasing function and UA ep(0) = 0.\n\u03b4A : T \u2192 [0, 1] is the time discount function.\nIt is also a monotone-decreasing function.\nWhen time t cost on negotiation increases, \u03b4A(t) will decrease, and the utility will also decrease.\nThus both negotiators want to reach agreement as quickly as possible to avoid loss of utility.\n3.2 Negotiation protocol When provider A and B have got QoC requirements and reputation information, they begin to negotiate.\nThey first set their reserved (the lowest acceptable) utility which can guarantee the system``s interests and their personal interests.\nWhen the context provider finds the utility of an offer and a reward is lower than its reserved utility, it will reject this proposal and terminate the negotiation process.\nThe provider who starts the negotiation is chosen randomly.\nWe assume A starts the negotiation, and it proposes offer o and reward ep to B according to its strategy (see subsection 3.3).\nWhen B receives the proposal from A, it uses its utility function to evaluate it.\nIf it is lower than its reserved utility, the provider terminates the negotiation.\nOtherwise, if UB(o, ep, t) \u2265 UB(o , ep , t + 1) i.e. 
the utility of o and ep proposed by A at time t is greater than the utility of offer o'' and reward ep'' which B will propose to A at time t + 1, B will accept this offer and reward.\nThe negotiation is completed.\nHowever, if UB(o, ep, t) < UB(o , ep , t + 1) then B will reject A``s proposal, and propose its counter-offer and reward to A.\nWhen A receives B``s counter-offer and reward, A evaluates them using its utility function, and compares the utility with the utility of offer and reward it wants to propose to B at time t+2, decides to accept it or give its counter-offer and reward.\nThis negotiation process continues and in each negotiation round, context providers concede in order to reach agreement.\nThe negotiation will be successfully finished when agreement is reached, or be terminated forcibly due to deadline or the utility lower than reserved utility.\nWhen negotiation is forced to be terminated, Context manager will ask A and B to calculate UA c (A), UA c (B), UB c (A) and UB c (B) respectively.\nIf UA c (A) + UB c (A) > UA c (B) + UB c (B) Context Manager let A provide context.\nIf UA c (A) + UB c (A) < UA c (B) + UB c (B) then B will get the right to provide context information.\nWhen UA c (A) + UB c (A) = UA c (B) + UB c (B) Context Manager will select a provider from A and B randomly.\nIn addition, Context Manager allocates the proceeds between the two providers.\nAlthough we can select one provider when negotiation is terminated forcibly, however, this may lead to the unfair allocation of the proceeds.\nMoreover, more time negotiators cost on negotiation, less proceeds will be given.\nThus negotiators will try to reach agreement as soon as possible in order to avoid unnecessary loss.\nWhen the negotiation is finished, the chosen provider provides the context information to Context Manager which will deliver the information to current application.\nAccording to the application``s feedback information about this context, Context Manager updates the provider``s reputation stored in Reputation of Context Providers.\nThe provider``s reputation may be enhanced or decreased.\nIn addition, according to the feedback and the negotiation time, Context Manager will give proceeds to the provider.\nThen the provider will share the proceeds with its opponent according to the negotiation outcome and the reward confirmed in the last negotiation process.\nFor example, in the last negotiation process A promised to give reward ep (0 \u2264 ep < 1) to B, and A``s portion of the proceeds is p in current negotiation.\nThen A``s actual portion of the proceeds is p \u00b7 (1 \u2212 ep), and its opponent B``s portion of the proceeds is 1\u2212p+p\u00b7ep.\n3.3 Negotiation strategy The context provider might want to pursue the right to provide context information blindly in order to enhance its reputation.\nHowever when it finally provides bad context information, its reputation will be decreased and the proceeds is also very small.\nThus the context provider should take action according to its strategy.\nThe aim of provider``s negotiation strategy is to determine the best course of action which will result in a negotiation outcome maximizing its utility function (i.e how to generate an offer and a reward).\nIn our negotiation model, the context provider generates its offer and reward according to its pervious offer and reward and the last one sent by its opponent.\nAt the beginning of the negotiation, context providers initialize their offers and rewards according to their beliefs and their 
reserved utility.\nIf context provider A considers that it can provide good context and wants to enhance reputation, then it will propose that A provides the context information, shares some proceeds with its opponent B, and even promises to give reward.\nHowever, if A considers that it may provide bad context, A will propose that its opponent B provide the context, and require B to share some proceeds and provide reward.\nDuring the negotiation process, we assume that at time t A proposes offer ot and reward ept to B, at time t + 1, B proposes counter-offer ot+1 and reward ept+1 to A.\nThen at time t + 2, when the utility of B``s proposal is greater than A``s reserved utility, A gives its response.\nNow we calculate the expected utility to be conceded at time t +2, we use Cu to express the conceded utility.\nCu = (UA(ot, ept, t) \u2212 UA(ot+1, ept+1, t + 1)) \u00b7 cA(t + 2) (UA(ot, ept, t) > UA(ot+1, ept+1, t + 1), otherwise, A will accept B``s proposal) where cA : T \u2192 [0, 1] is a monotoneincreasing function.\ncA(t) indicates A``s utility concession rate1 .\nA concedes a little in the beginning before conceding significantly towards the deadline.\nThen A generates its offer ot+2 = (ct+2, pt+2) and reward ept+2 at time t + 2.\nThe expected utility of A at time t + 2 is: UA(ot+2, ept+2, t + 2) = UA(ot, ept, t + 2) \u2212 Cu If UA(ot+2, ept+2, t + 2) \u2264 UA(ot+1, ept+1, t + 1) then A will accept B``s proposal (i.e. ot+1 and ept+1).\nOtherwise, A will propose its counter-offer and reward based on Cu.\nWe assume that Cu is distributed evenly on c, p and ep (i.e. the utility to be conceded on c, p and ep is 1 3 Cu respectively).\nIf |UA c (ct)\u2212(UA c (ct)\u2212 1 3 Cu \u03b4A(t+2) )| \u2264 |UA c (ct+1)\u2212(UA c (ct)\u2212 1 3 Cu \u03b4A(t+2) )| i.e. 
the expected utility of c at time t+2 is UA c (ct)\u2212 1 3 Cu \u03b4A(t+2) and it is closer to the utility of A``s proposal ct at time t, then at time t + 2, ct+2 = ct, else the utility is closer to B``proposal ct+1 and ct+2 = ct+1.\nWhen ct+2 is equal to ct, the actual conceded utility of c is 0, and the total concession of p and ep is Cu.\nWe divide the total concession of p and ep evenly, and get the conceded utility of p and ep respectively.\nWe calculate pt+2 and ept+2 as follows: pt+2 = (UA p )\u22121 (UA p (pt) \u2212 1 2 Cu \u03b4A(t + 2) ) ept+2 = (UA ep)\u22121 (UA ep(ept) \u2212 1 2 Cu \u03b4A(t + 2) ) When ct+2 is equal to ct+1, the actual conceded utility of c is |UA c (ct+2) \u2212 UA c (ct)|, the total concession of p and ep is Cu \u03b4A(t+2) \u2212 |UA c (ct+2) \u2212 UA c (ct)|, then: pt+2 = (UA p )\u22121 (UA p (pt)\u2212 1 2 ( Cu \u03b4A(t + 2) \u2212|UA c (ct+2)\u2212UA c (ct)|)) ept+2 = (UA ep)\u22121 (UA ep(ept)\u22121 2 ( Cu \u03b4A(t+2) \u2212|UA c (ct+2)\u2212UA c (ct)|)) Now, we have generated the offer and reward A will propose at time t + 2.\nSimilarly, B also can generate its offer and reward.\n1 For example, cA(t) = ( t tdeadline ) 1 \u03b2 (0 < \u03b2 < 1) Utility function and weight of c, p and ep Uc, w1 Up, w2 Uep, w3 A 0.5(1 \u2212 dA 500 ) + 0.5repA 1000 , 0.6 0.9p, 0.3 0.9ep, 0.1 B 0.52(1 \u2212 dB 500 ) + 0.48repB 1000 , 0.5 0.9p, 0.45 0.8ep, 0.05 Table 1: Utility functions and weights of c, p and ep for each provider 4.\nEVALUATION In this section, we evaluate the effectiveness of our approach by simulated experiments.\nContext providers A and B negotiate to reach agreement.\nThey get QoC requirements and calculate the distance between Qoc requirements and their QoC.\nFor simplicity, in our experiments, we assume that the distance has been calculated, and dA represents distance between QoC requirements and A``s QoC, dB represents distance between QoC requirements and B``s QoC.\nThe domain of dA and dB is [0,500].\nWe assume reputation value is a real number and its domain is [-1000, 1000], repA represents A``s reputation value and repB represents B``s reputation value.\nWe assume that both providers pay the most attention to the system``s interests, and pay the least attention to the reward, thus w1 > w2 > w3, and the weight of Ud approximates the weight of Urep.\nA and B``s utility functions and weights of c, p and ep are defined in Table 1.\nWe set deadline tdeadline = 100, and define time discount function \u03b4(t) and concession rate function c(t) of A and B as follows: \u03b4A(t) = 0.9t \u03b4B(t) = 0.88t cA(t) = ( t tdeadline ) 1 0.8 cB(t) = ( t tdeadline ) 1 0.6 Given different values of dA, dB, repA and repB, A and B negotiate to reach agreement.\nThe provider that starts the negotiation is chosen at random.\nWe hope that when dA dB and repA repB, A will get the right to provide context and get a major portion of the proceeds, and when \u2206d = dA \u2212 dB is in a small range (e.g. [-50,50]) and \u2206rep = repA \u2212 repB is in a small range (e.g. [-50,50]), A and B will get approximately equal opportunities to provide context, and allocate the proceeds evenly.\nWhen dA\u2212dB 500 approximates to dA\u2212dB 1000 (i.e. 
the two providers'' abilities to provide context information are approximately equal), we also hope that A and B get equal opportunities to provide context and allocate the proceeds evenly.\nAccording to the three situations above, we make three experiments as follows: Experiment 1 : In this experiment, A and B negotiate with each other for 50 times, and at each time, we assign different values to dA, dB, repA, repB (satisfying dA dB and repA repB) and the reserved utilities of A and B.\nWhen the experiment is completed, we find 3 negotiation games are terminated due to the utility lower than the reserved utility.\nA gets the right to provide context for 47 times.\nThe average portion of proceeds A get is about 0.683, and B``s average portion of proceeds is 0.317.\nThe average time cost to reach agreement is 8.4.\nWe also find that when B asks A to provide context in its first offer, B can require and get more portion of the proceeds because of its goodwill.\nExperiment 2 : A and B also negotiate with each other for 50 times in this experiment given different values of dA, dB, repA, repB (satisfying \u221250 \u2264 \u2206d = dA \u2212 dB \u2264 50 and \u221250 \u2264 \u2206rep = drep \u2212drep \u2264 50) and the reserved utilities of A and B.\nAfter the experiment, we find that there are 8 negotiation games terminated due to the utility lower than the reserved utility.\nA and B get the right to provide context for 20 times and 22 times respectively.\nThe average portion of proceeds A get is 0.528 and B``s average portion of the proceeds is 0.472.\nThe average time cost on negotiation is 10.5.\nExperiment 3 : In this experiment, A and B also negotiate with each other for 50 times given dA, dB, repA, repB (satisfying \u22120.2 \u2264 dA\u2212dB 500 \u2212 dA\u2212dB 1000 \u2264 0.2) and the reserved utilities of A and B.\nThere are 6 negotiation games terminated forcibly.\nA and B get the right to provide context for 21 times and 23 times respectively.\nThe average portion of proceeds A get is 0.481 and B``s average portion of the proceeds is 0.519.\nThe average time cost on negotiation is 9.2.\nOne thing should be mentioned is that except for d, rep, p and ep, other factors (e.g. 
weights, time discount function \u03b4(t) and concession rate function c(t)) could also affect the negotiation outcome.\nThese factors should be adjusted according to providers'' beliefs at the beginning of each negotiation process.\nIn our experiments, for similarity, we assign values to them without any particularity in advance.\nThese experiments'' results prove that our approach can choose an appropriate context provider and can provide a relatively fair proceeds allocation.\nWhen one provider is obviously more appropriate than the other provider, the provider will get the right to provide context and get a major portion of the proceeds.\nWhen both providers have the approximately same abilities to provide context, their opportunities to provide context are equal and they can get about a half portion of the proceeds respectively.\n5.\nRELATED WORK In [4], Huebscher and McCann have proposed an adaptive middleware design for context-aware applications.\nTheir adaptive middleware uses utility functions to choose the best context provider (given the QoC requirements of applications and the QoC of alternative means of context acquisition).\nIn our negotiation model, the calculation of utility function Uc was inspired by this approach.\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [3].\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs.\nThey also outline a software infrastructure that supports the management and use of imperfect context information.\nJudd and Steenkiste in [5] describe a generic interface to query context services allowing clients to specify their quality requirements as bounds on accuracy, confidence, update time and sample interval.\nIn [6], Lei et al. 
present a context service which accepts freshness and confidence meta-data from context sources, and passes this along to clients so that they can adjust their level of trust accordingly.\n[10] presents a framework for realizing dynamic context consistency management.\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions to context sources.\nMost approaches to provide appropriate context utilize a centralized arbitrator.\nIn our approach, we let distributed context providers themselves decide who can provide appropriate context information.\nOur approach can reduce the burden of the middleware, because we do not need the middleware to provide a context selection mechanism.\nIt can avoid the serious consequences caused by a breakdown of the arbitrator.\nAlso, it can guarantee context providers'' interests.\n6.\nCONCLUSION AND FUTURE WORK How to provide the appropriate context information is a challenging problem in pervasive computing.\nIn this paper, we have presented a novel approach based on negotiation with rewards to attempt to solve such problem.\nDistributed context providers negotiate with each other to reach agreement on the issues who can provide the appropriate context and how they allocate the proceeds.\nThe results of our experiments have showed that our approach can choose an appropriate context provider, and also can guarantee providers'' interests by a relatively fair proceeds allocation.\nIn this paper, we only consider how to choose an appropriate context provider from two providers.\nIn the future work, this negotiation model will be extended, and more than two context providers can negotiate with each other to decide who is the most appropriate context provider.\nIn the extended negotiation model, how to design efficient negotiation strategies will be a challenging problem.\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process.\nIn fact, the context provider might deceive its opponent and provide illusive promise.\nWe should solve this problem in the future.\nWe also should deal with interactions which are interrupted by failing communication links in the future work.\n7.\nACKNOWLEDGEMENT The work is funded by 973 Project of China(2002CB312002, 2006CB303000), NSFC(60403014) and NSFJ(BK2006712).\n8.\nREFERENCES [1] J. Coutaz, J. L. Crowley, S. Dobson, and D. Garlan.\nContext is key.\nCommun.\nACM, 48(3):49 - 53, March 2005.\n[2] D.G.Pruitt.\nNegotiation behavior.\nAcademic Press, 1981.\n[3] K. Henricksen and J. Indulska.\nModelling and using imperfect context information.\nIn Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, pages 33-37, 2004.\n[4] M. C. Huebscher and J. A. McCann.\nAdaptive middleware for context-aware applications in smart-homes.\nIn Proceedings of the 2nd workshop on Middleware for pervasive and ad-hoc computing MPAC ``04, pages 111-116, October 2004.\n[5] G. Judd and P. Steenkiste.\nProviding contextual information to pervasive computing applications.\nIn Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, pages 133-142, 2003.\n[6] H. Lei, D. M. Sow, J. S. Davis, G. Banavar, and M. R. Ebling.\nThe design and applications of a context service.\nACM SIGMOBILE Mobile Computing and Communications Review, 6(4):45-55, 2002.\n[7] J. Liu and V. 
Issarny.\nEnhanced reputation mechanism for mobile ad-hoc networks.\nIn Trust Management: Second International Conference, iTrust, 2004.\n[8] H. Raiffa.\nThe Art and Science of Negotiation.\nHarvard University Press, 1982.\n[9] S. D. Ramchurn, N. R. Jennings, and C. Sierra.\nPersuasive negotiation for autonomous agents: A rhetorical approach.\nIn C. Reed, editor, Workshop on the Computational Models of Natural Argument, IJCAI, pages 9-18, 2003.\n[10] C. Xu and S. C. Cheung.\nInconsistency detection and resolution for context-aware middleware support.\nIn Proceedings of the 10th European software engineering conference, pages 336-345, 2005.", "lvl-3": "Rewards-Based Negotiation for Providing Context Information\nABSTRACT\nHow to provide appropriate context information is a challenging problem in context-aware computing .\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate .\nIn this paper , we propose a novel approach based on negotiation with rewards to solving such problem .\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate proceeds .\nIn order to support our approach , we have designed a concrete negotiation model with rewards .\nWe also evaluate our approach and show that it indeed can choose an appropriate context provider and allocate the proceeds fairly .\n1 .\nINTRODUCTION\nContext-awareness is a key concept in pervasive computing .\nContext informs both recognition and mapping by providing a structured , unified view of the world in which the system operates [ 1 ] .\nContext-aware applications exploit context information , such as location , preferences of users and so on , to adapt their behaviors in response to changing requirements of users and pervasive environments .\nHowever , one specific kind of context can often be provided by different context providers ( sensors or other data sources of context information ) with different quality levels .\nFor example ,\nin a smart home , thermometer A 's measurement precision is 0.1 \u00b0 C , and thermometer B 's measurement precision is 0.5 \u00b0 C. Thus A could provide more precise context information about temperature than B. 
Moreover , sometimes different context providers may provide conflictive context information .\nFor example , different sensors report that the same person is in different places at the same time .\nBecause context-aware applications utilize context information to adapt their behaviors , inappropriate context information may lead to inappropriate behavior .\nThus we should design a mechanism to provide appropriate context information for current context-aware applications .\nIn pervasive environments , context providers considered as relatively independent entities , have their own interests .\nThey hope to get proceeds when they provide context information .\nHowever , most existing approaches consider context providers as entities without any personal interests , and use a centralized `` arbitrator '' provided by the middleware to decide who can provide appropriate context .\nThus the burden of the middleware is very heavy , and its decision may be unfair and harm some providers ' interests .\nMoreover , when such `` arbitrator '' is broken down , it will cause serious consequences for context-aware applications .\nIn this paper , we let distributed context providers themselves decide who provide context information .\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future , providers try to get the right to provide `` good '' context to enhance their reputation .\nIn order to get such right , context providers may agree to share some portion of the proceeds with its opponents .\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds .\nOur approach has some specific advantages :\n1 .\nWe do not need an `` arbitrator '' provided by the middleware of pervasive computing to decide who provides context .\nThus it will reduce the burden of the middleware .\n2 .\nIt is more reasonable that distributed context providers decide who provide context , because it can avoid the serious consequences caused by a breakdown of a centralized `` arbitrator '' .\n3 .\nIt can guarantee providers ' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems .\n4 .\nThis approach can choose an appropriate provider au\ntomatically .\nIt does not need any applications and users ' intervention .\nThe negotiation model we have designed to support our approach is also a novel model in negotiation domain .\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of next negotiation process ( i.e. 
rewards ) .\nNegotiator may find current offer and reward worth more than counter-offer which will delay the agreement , and accepts current offer and reward .\nWithout the reward , it may find current offer worth less than the counter-offer , and proposes its counter-offer .\nIt will cost more time to reach agreement .\nIt also expands the negotiation space considered in present negotiation process , and therefore provides more possibilities to find better agreement .\nThe remainder of this paper is organized as follows .\nSection 2 presents some assumptions .\nSection 3 describes our approach based on negotiation detailedly , including utility functions , negotiation protocol and context providers ' strategies .\nSection 4 evaluates our approach .\nIn section 5 we introduce some related work and conclude in section 6 .\n2 .\nSOME ASSUMPTIONS\n3 .\nOUR APPROACH\n3.1 Utility function\n3.2 Negotiation protocol\n3.3 Negotiation strategy\n4 .\nEVALUATION\n5 .\nRELATED WORK\nIn [ 4 ] , Huebscher and McCann have proposed an adaptive middleware design for context-aware applications .\nTheir adaptive middleware uses utility functions to choose the best context provider ( given the QoC requirements of applications and the QoC of alternative means of context acquisition ) .\nIn our negotiation model , the calculation of utility function Uc was inspired by this approach .\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [ 3 ] .\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs .\nThey also outline a software infrastructure that supports the management and use of imperfect context information .\nJudd and Steenkiste in [ 5 ] describe a generic interface to query context services allowing clients to specify their quality requirements as bounds on accuracy , confidence , update time and sample interval .\nIn [ 6 ] , Lei et al. 
present a context service which accepts freshness and confidence meta-data from context sources , and passes this along to clients so that they can adjust their level of trust accordingly .\n[ 10 ] presents a framework for realizing dynamic context consistency management .\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model , and inconsistency resolution with proactive actions to context sources .\nMost approaches to provide appropriate context utilize a centralized `` arbitrator '' .\nIn our approach , we let distributed context providers themselves decide who can provide appropriate context information .\nOur approach can reduce the burden of the middleware , because we do not need the middleware to provide a context selection mechanism .\nIt can avoid the serious consequences caused by a breakdown of the `` arbitrator '' .\nAlso , it can guarantee context providers ' interests .\n6 .\nCONCLUSION AND FUTURE WORK\nHow to provide the appropriate context information is a challenging problem in pervasive computing .\nIn this paper , we have presented a novel approach based on negotiation with rewards to attempt to solve such problem .\nDistributed context providers negotiate with each other to reach agreement on the issues who can provide the appropriate context and how they allocate the proceeds .\nThe results of our experiments have showed that our approach can choose an appropriate context provider , and also can guarantee providers ' interests by a relatively fair proceeds allocation .\nIn this paper , we only consider how to choose an appropriate context provider from two providers .\nIn the future work , this negotiation model will be extended , and more than two context providers can negotiate with each other to decide who is the most appropriate context provider .\nIn the extended negotiation model , how to design efficient negotiation strategies will be a challenging problem .\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process .\nIn fact , the context provider might deceive its opponent and provide illusive promise .\nWe should solve this problem in the future .\nWe also should deal with interactions which are interrupted by failing communication links in the future work .", "lvl-4": "Rewards-Based Negotiation for Providing Context Information\nABSTRACT\nHow to provide appropriate context information is a challenging problem in context-aware computing .\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate .\nIn this paper , we propose a novel approach based on negotiation with rewards to solving such problem .\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate proceeds .\nIn order to support our approach , we have designed a concrete negotiation model with rewards .\nWe also evaluate our approach and show that it indeed can choose an appropriate context provider and allocate the proceeds fairly .\n1 .\nINTRODUCTION\nContext-awareness is a key concept in pervasive computing .\nContext informs both recognition and mapping by providing a structured , unified view of the world in which the system operates [ 1 ] .\nContext-aware applications exploit context information , such as location , preferences of users and so on , to adapt their behaviors in response to changing requirements of users and pervasive environments .\nHowever , one specific kind of 
context can often be provided by different context providers ( sensors or other data sources of context information ) with different quality levels .\nFor example ,\nBecause context-aware applications utilize context information to adapt their behaviors , inappropriate context information may lead to inappropriate behavior .\nThus we should design a mechanism to provide appropriate context information for current context-aware applications .\nIn pervasive environments , context providers considered as relatively independent entities , have their own interests .\nThey hope to get proceeds when they provide context information .\nHowever , most existing approaches consider context providers as entities without any personal interests , and use a centralized `` arbitrator '' provided by the middleware to decide who can provide appropriate context .\nThus the burden of the middleware is very heavy , and its decision may be unfair and harm some providers ' interests .\nMoreover , when such `` arbitrator '' is broken down , it will cause serious consequences for context-aware applications .\nIn this paper , we let distributed context providers themselves decide who provide context information .\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future , providers try to get the right to provide `` good '' context to enhance their reputation .\nIn order to get such right , context providers may agree to share some portion of the proceeds with its opponents .\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds .\nOur approach has some specific advantages :\n1 .\nWe do not need an `` arbitrator '' provided by the middleware of pervasive computing to decide who provides context .\nThus it will reduce the burden of the middleware .\n2 .\nIt is more reasonable that distributed context providers decide who provide context , because it can avoid the serious consequences caused by a breakdown of a centralized `` arbitrator '' .\n3 .\nIt can guarantee providers ' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems .\n4 .\nThis approach can choose an appropriate provider au\ntomatically .\nThe negotiation model we have designed to support our approach is also a novel model in negotiation domain .\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of next negotiation process ( i.e. 
rewards ) .\nIt will cost more time to reach agreement .\nIt also expands the negotiation space considered in present negotiation process , and therefore provides more possibilities to find better agreement .\nSection 2 presents some assumptions .\nSection 3 describes our approach based on negotiation detailedly , including utility functions , negotiation protocol and context providers ' strategies .\nSection 4 evaluates our approach .\nIn section 5 we introduce some related work and conclude in section 6 .\n5 .\nRELATED WORK\nIn [ 4 ] , Huebscher and McCann have proposed an adaptive middleware design for context-aware applications .\nTheir adaptive middleware uses utility functions to choose the best context provider ( given the QoC requirements of applications and the QoC of alternative means of context acquisition ) .\nIn our negotiation model , the calculation of utility function Uc was inspired by this approach .\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [ 3 ] .\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs .\nThey also outline a software infrastructure that supports the management and use of imperfect context information .\n[ 10 ] presents a framework for realizing dynamic context consistency management .\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model , and inconsistency resolution with proactive actions to context sources .\nMost approaches to provide appropriate context utilize a centralized `` arbitrator '' .\nIn our approach , we let distributed context providers themselves decide who can provide appropriate context information .\nOur approach can reduce the burden of the middleware , because we do not need the middleware to provide a context selection mechanism .\nAlso , it can guarantee context providers ' interests .\n6 .\nCONCLUSION AND FUTURE WORK\nHow to provide the appropriate context information is a challenging problem in pervasive computing .\nIn this paper , we have presented a novel approach based on negotiation with rewards to attempt to solve such problem .\nDistributed context providers negotiate with each other to reach agreement on the issues who can provide the appropriate context and how they allocate the proceeds .\nThe results of our experiments have showed that our approach can choose an appropriate context provider , and also can guarantee providers ' interests by a relatively fair proceeds allocation .\nIn this paper , we only consider how to choose an appropriate context provider from two providers .\nIn the future work , this negotiation model will be extended , and more than two context providers can negotiate with each other to decide who is the most appropriate context provider .\nIn the extended negotiation model , how to design efficient negotiation strategies will be a challenging problem .\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process .\nIn fact , the context provider might deceive its opponent and provide illusive promise .\nWe should solve this problem in the future .", "lvl-2": "Rewards-Based Negotiation for Providing Context Information\nABSTRACT\nHow to provide appropriate context information is a challenging problem in context-aware computing .\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate .\nIn this paper , we 
propose a novel approach based on negotiation with rewards to solving such problem .\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate proceeds .\nIn order to support our approach , we have designed a concrete negotiation model with rewards .\nWe also evaluate our approach and show that it indeed can choose an appropriate context provider and allocate the proceeds fairly .\n1 .\nINTRODUCTION\nContext-awareness is a key concept in pervasive computing .\nContext informs both recognition and mapping by providing a structured , unified view of the world in which the system operates [ 1 ] .\nContext-aware applications exploit context information , such as location , preferences of users and so on , to adapt their behaviors in response to changing requirements of users and pervasive environments .\nHowever , one specific kind of context can often be provided by different context providers ( sensors or other data sources of context information ) with different quality levels .\nFor example ,\nin a smart home , thermometer A 's measurement precision is 0.1 \u00b0 C , and thermometer B 's measurement precision is 0.5 \u00b0 C. Thus A could provide more precise context information about temperature than B. Moreover , sometimes different context providers may provide conflictive context information .\nFor example , different sensors report that the same person is in different places at the same time .\nBecause context-aware applications utilize context information to adapt their behaviors , inappropriate context information may lead to inappropriate behavior .\nThus we should design a mechanism to provide appropriate context information for current context-aware applications .\nIn pervasive environments , context providers considered as relatively independent entities , have their own interests .\nThey hope to get proceeds when they provide context information .\nHowever , most existing approaches consider context providers as entities without any personal interests , and use a centralized `` arbitrator '' provided by the middleware to decide who can provide appropriate context .\nThus the burden of the middleware is very heavy , and its decision may be unfair and harm some providers ' interests .\nMoreover , when such `` arbitrator '' is broken down , it will cause serious consequences for context-aware applications .\nIn this paper , we let distributed context providers themselves decide who provide context information .\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future , providers try to get the right to provide `` good '' context to enhance their reputation .\nIn order to get such right , context providers may agree to share some portion of the proceeds with its opponents .\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds .\nOur approach has some specific advantages :\n1 .\nWe do not need an `` arbitrator '' provided by the middleware of pervasive computing to decide who provides context .\nThus it will reduce the burden of the middleware .\n2 .\nIt is more reasonable that distributed context providers decide who provide context , because it can avoid the serious consequences caused by a breakdown of a centralized `` arbitrator '' .\n3 .\nIt can guarantee providers ' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement 
on their concerned problems .\n4 .\nThis approach can choose an appropriate provider au\ntomatically .\nIt does not need any applications and users ' intervention .\nThe negotiation model we have designed to support our approach is also a novel model in negotiation domain .\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of next negotiation process ( i.e. rewards ) .\nNegotiator may find current offer and reward worth more than counter-offer which will delay the agreement , and accepts current offer and reward .\nWithout the reward , it may find current offer worth less than the counter-offer , and proposes its counter-offer .\nIt will cost more time to reach agreement .\nIt also expands the negotiation space considered in present negotiation process , and therefore provides more possibilities to find better agreement .\nThe remainder of this paper is organized as follows .\nSection 2 presents some assumptions .\nSection 3 describes our approach based on negotiation detailedly , including utility functions , negotiation protocol and context providers ' strategies .\nSection 4 evaluates our approach .\nIn section 5 we introduce some related work and conclude in section 6 .\n2 .\nSOME ASSUMPTIONS\nBefore introducing our approach , we would like to give some assumptions :\n1 .\nAll context providers are well-meaning and honest .\nDuring the negotiation process , they exchange information honestly .\nRewards confirmed in this negotiation process will be fulfilled in the next negotiation process .\n2 .\nAll providers must guarantee the system 's interests .\nThey should provide appropriate context information for current applications .\nAfter guaranteeing the system 's interest , they can try to maximize their own personal interests .\nThe assumption is reasonable , because when an inappropriate context provider gets the right to provide `` bad '' context , as a punishment , its reputation will decrease , and the proceeds is also very small .\n3 .\nAs context providers are independent , factors which influence their negotiation stance and behavior are private and not available to their opponents .\nTheir utility functions are also private .\n4 .\nSince the negotiation takes place in pervasive environ\nments , time is a critical factors .\nThe current application often hopes to get context information as quickly as possible , so the time cost to reach agreement should be as short as possible .\nContext providers often have strict deadline by when the negotiation must be completed .\nAfter presenting these assumptions , we will propose our approach based on negotiation with rewards in the next section .\n3 .\nOUR APPROACH\nIn the beginning , we introduce the concepts of reputation and Quality of Context ( QoC ) attributes .\nBoth will be used in our approach .\nReputation of an agent is a perception regarding its behavior norms , which is held by other agents , based on experiences and observation of its past actions [ 7 ] .\nHere agent means context provider .\nEach provider 's reputation indicates its historical ability to provide appropriate context information .\nQuality of Context ( QoC ) attributes characterize the quality of context information .\nWhen applications require context information , they should specify their QoC requirements which express constraints of QoC attributes .\nContext providers can specify QoC attributes for the context information they deliver .\nAlthough we can decide who provides 
appropriate context according to QoC requirements and context providers ' QoC information , applications ' QoC requirements might not reflect the actual quality requirements .\nThus , in addition to QoC , reputation information of context providers is another factor affecting the decision who can provide context information .\nNegotiation is a process by which a joint decision is made by two or more parties .\nThe parties first verbalize contradictory demands and then move towards agreement by a process of concession making or search for new alternatives [ 2 ] .\nIn pervasive environments , all available context providers negotiate with each other to decide who can provide context information .\nThis process will be repeated because a kind of context is needed more than one time .\nNegotiation using persuasive arguments ( such as threats , promises of future rewards , and appeals ) allows negotiation parties to influence each others ' preferences to reach better deals effectively and efficiently [ 9 ] .\nThis pervasive negotiation is effective in repeated interaction because arguments can be constructed to directly impact future encounters .\nIn this paper , for simplicity , we let negotiation take place between two providers .\nWe extend Raiffa 's basic model for bilateral negotiation [ 8 ] , and allow negotiators to negotiate with each other by exchanging arguments in the form of promises of future rewards or requests for future rewards .\nRewards mean some extra proceeds in the next negotiation process .\nThey can influence outcomes of current and future negotiation .\nIn our approach , as described by Figure 1 , the current application requires Context Manager to provide a specific type of context information satisfying QoC requirements .\nContext Manager finds that provider A and B can provide such kind of context with different quality levels .\nThen the manager tells A and B to negotiate to reach agreement on who can provide the context information and how they will allocate the proceeds .\nBoth providers get reputation information from the database Reputation of Context Providers and QoC requirements , and then negotiate with each other according to our negotiation model .\nWhen negotiation is completed , the chosen provider will provide the context information to Context Manager , and then Context Manager delivers such information to the application and also stores it in Context Knowledge Base where current and historical context information is stored .\nThe current application gives the feedback information about the provided context , and then Context Manager will update the chosen provider 's reputation information according to the feedback information .\nContext Manager also provides the proceeds to providers according to the feedback information and the time cost on negotiation .\nIn the following parts of this section , we describe our negotiation model in detail , including context providers ' utility functions to evaluate offers and rewards , negotiation protocol , and strategies to generate offers and rewards .\nFigure 1 : Negotiate to provide appropriate context information .\nsion in metres with refresh rate in Hz , thus a standard scale for all dimension is needed .\nThe scaling factors for the QoC attributes are s ~ = ( s1 , s2 ... sn ) .\nIn addition , different QoC attributes may have different weights : w ~ = ( w1 , w2 ... wn ) .\nThen d ~ = ( d1 , d2 ... dn )\nwhere cpi \u2212 ai = 0 for ai = \u00af and cpi \u2212 ai = o ( ai ) for cpi = \u00af ( o ( . 
)\ndetermines the application 's satisfaction or dissatisfaction when c is unable to provide an estimate of a QoC attribute , given the value wished for by the application ) .\nThe distance can be linear distance ( 1-norm ) , Euclidean distance ( 2-norm ) , or the maximum distance ( max-norm ) :\n3.1 Utility function\nDuring the negotiation process , one provider proposes an offer and a reward to the other provider .\nAn offer is noted as o = ( c , p ) : c indicates the chosen context provider and its domain is Dc ( i.e. the two context providers participating in the negotiation ) ; p means the proposer 's portion of the proceeds , and its domain is Dp = [ 0,1 ] .\nIts opponent 's portion of the proceeds is 1 \u2212 p .\nThe reward ep 's domain is Dep = [ -1,1 ] , and | ep | means the extra portion of proceeds the proposer promises to provide or requests in the next negotiation process .\nep < 0 means the proposer promises to provide reward , ep > 0 means the proposer requests reward and ep = 0 means no reward .\nThe opponent evaluates the offer and reward to decide to accept them or propose a counter-offer and a reward .\nThus context providers should have utility functions to evaluate offers and rewards .\nTime is a critical factor , and only at times in the set T = { 0 , 1 , 2 , ... tdeadline } , context providers can propose their offers .\nThe set O include all available offers .\nContext provider A 's utility function of the offer and reward at time t UA : O \u00d7 Dep \u00d7 T \u2192 [ \u2212 1 , 1 ] is defined as :\nSimilarly , the utility function of A 's opponent ( i.e. B ) can be defined as :\nIn ( 1 ) , wA1 , wA2 and wA3 are weights given to c , p and ep respectively , and wA1 + wA2 + wA3 = 1 .\nUsually , the context provider pays the most attention to the system 's interests , pays the least attention to the reward , thus wA1 > wA2 > wA3 .\nUcA : Dc \u2192 [ \u2212 1 , 1 ] is the utility function of the issue who provides context .\nThis function is determined by two factors : the distance between c 's QoC and current application 's QoC requirements , and c 's reputation .\nThe two negotiators acquire c 's QoC information from c , and we use the approach proposed in [ 4 ] to calculate the distance between c 's QoC and the application 's Qoc requirements .\nThe required context has n QoC attributes and let the application 's wishes for this context be a ~ = ( a1 , a2 ... an ) ( where ai = \u00af means the application 's indifference to the i-th QoC attribute ) , c 's QoC attributes ~ cp = ( cp1 , cp2 ... cpn ) ( where cpi = \u00af means c 's inability to provide a quantitative value for the i-th QoC attribute ) .\nBecause numerical distance values of different properties are combined , e.g. location preci | | ~ d | | \u221e = max { | d1 | , | d2 | ... 
| dn | } ( max \u2212 norm ) The detail description of this calculation can be found in [ 4 ] .\nReputation of c can be acquired from the database Reputation of Context Providers .\nUcA ( c ) : R \u00d7 Drep \u2192 [ \u2212 1 , 1 ] can be defined as :\nwAc1 and wAc2 are weights given to the distance and reputation respectively , and wAc1 + wAc2 = 1 .\nDrep is the domain of reputation information .\nUdA : R \u2192 [ 0 , 1 ] is a monotonedecreasing function and UA rep : Drep \u2192 [ \u2212 1 , 1 ] is a monotoneincreasing function .\nUpA : Dp \u2192 [ 0 , 1 ] is the utility function of the portion of proceeds A will receive and it is also a monotone-increasing function .\nA 's utility function of reward ep UA ep : Dep \u2192 [ \u2212 1 , 1 ] is also a monotone-increasing function and UAep ( 0 ) = 0 .\n\u03b4A : T \u2192 [ 0 , 1 ] is the time discount function .\nIt is also a monotone-decreasing function .\nWhen time t cost on negotiation increases , \u03b4A ( t ) will decrease , and the utility will also decrease .\nThus both negotiators want to reach agreement as quickly as possible to avoid loss of utility .\n3.2 Negotiation protocol\nWhen provider A and B have got QoC requirements and reputation information , they begin to negotiate .\nThey first set their reserved ( the lowest acceptable ) utility which can guarantee the system 's interests and their personal interests .\nWhen the context provider finds the utility of an offer and a reward is lower than its reserved utility , it will reject this proposal and terminate the negotiation process .\nThe provider who starts the negotiation is chosen randomly .\nWe assume A starts the negotiation , and it proposes offer o and reward ep to B according to its strategy ( see subsection 3.3 ) .\nWhen B receives the proposal from A , it uses its utility function to evaluate it .\nIf it is lower than its reserved utility , the provider terminates the negotiation .\nOtherwise , if\ni.e. 
the utility of o and ep proposed by A at time t is greater than the utility of offer o ' and reward ep ' which B will propose to A at time t + 1 , B will accept this offer and reward .\nThe negotiation is completed .\nHowever , if\nthen B will reject A 's proposal , and propose its counter-offer and reward to A .\nWhen A receives B 's counter-offer and reward , A evaluates them using its utility function , and compares the utility with the utility of offer and reward it wants to propose to B at time t +2 , decides to accept it or give its counter-offer and reward .\nThis negotiation process continues and in each negotiation round , context providers concede in order to reach agreement .\nThe negotiation will be successfully finished when agreement is reached , or be terminated forcibly due to deadline or the utility lower than reserved utility .\nWhen negotiation is forced to be terminated , Context manager will ask A and B to calculate UcA ( A ) , UcA ( B ) , UcB ( A ) and UcB ( B ) respectively .\nIf\nContext Manager will select a provider from A and B randomly .\nIn addition , Context Manager allocates the proceeds between the two providers .\nAlthough we can select one provider when negotiation is terminated forcibly , however , this may lead to the unfair allocation of the proceeds .\nMoreover , more time negotiators cost on negotiation , less proceeds will be given .\nThus negotiators will try to reach agreement as soon as possible in order to avoid unnecessary loss .\nWhen the negotiation is finished , the chosen provider provides the context information to Context Manager which will deliver the information to current application .\nAccording to the application 's feedback information about this context , Context Manager updates the provider 's reputation stored in Reputation of Context Providers .\nThe provider 's reputation may be enhanced or decreased .\nIn addition , according to the feedback and the negotiation time , Context Manager will give proceeds to the provider .\nThen the provider will share the proceeds with its opponent according to the negotiation outcome and the reward confirmed in the last negotiation process .\nFor example , in the last negotiation process A promised to give reward ep ( 0 \u2264 ep < 1 ) to B , and A 's portion of the proceeds is p in current negotiation .\nThen A 's actual portion of the proceeds is p \u00b7 ( 1 \u2212 ep ) , and its opponent B 's portion of the proceeds is 1 \u2212 p + p \u00b7 ep .\n3.3 Negotiation strategy\nThe context provider might want to pursue the right to provide context information blindly in order to enhance its reputation .\nHowever when it finally provides `` bad '' context information , its reputation will be decreased and the proceeds is also very small .\nThus the context provider should take action according to its strategy .\nThe aim of provider 's negotiation strategy is to determine the best course of action which will result in a negotiation outcome maximizing its utility function ( i.e how to generate an offer and a reward ) .\nIn our negotiation model , the context provider generates its offer and reward according to its pervious offer and reward and the last one sent by its opponent .\nAt the beginning of the negotiation , context providers initialize their offers and rewards according to their beliefs and their reserved utility .\nIf context provider A considers that it can provide `` good '' context and wants to enhance reputation , then it will propose that A provides the context information , shares 
some proceeds with its opponent B , and even promises to give reward .\nHowever , if A considers that it may provide `` bad '' context , A will propose that its opponent B provide the context , and require B to share some proceeds and provide reward .\nDuring the negotiation process , we assume that at time t A proposes offer ot and reward ept to B , at time t + 1 , B proposes counter-offer ot +1 and reward ept +1 to A .\nThen at time t + 2 , when the utility of B 's proposal is greater than A 's reserved utility , A gives its response .\nNow we calculate the expected utility to be conceded at time t +2 , we use Cu to express the conceded utility .\ncept B 's proposal ) where cA : T \u2192 [ 0 , 1 ] is a monotoneincreasing function .\ncA ( t ) indicates A 's utility concession rate1 .\nA concedes a little in the beginning before conceding significantly towards the deadline .\nThen A generates its offer ot +2 = ( ct +2 , pt +2 ) and reward ept +2 at time t + 2 .\nThe expected utility of A at time t + 2 is :\nthen A will accept B 's proposal ( i.e. ot +1 and ept +1 ) .\nOtherwise , A will propose its counter-offer and reward based on Cu .\nWe assume that Cu is distributed evenly on c , p and ep ( i.e. the utility to be conceded on c , p and ep is 31 Cu respectively ) .\nIf\ni.e. the expected utility of c at time t +2 is UcA ( ct ) \u2212 \u03b4a ( t +2 ) and it is closer to the utility of A 's proposal ct at time t , then at time t + 2 , ct +2 = ct , else the utility is closer to B'proposal ct +1 and ct +2 = ct +1 .\nWhen ct +2 is equal to ct , the actual conceded utility of c is 0 , and the total concession of p and ep is Cu .\nWe divide the total concession of p and ep evenly , and get the conceded utility of p and ep respectively .\nWe calculate pt +2 and ept +2 as follows :\nNow , we have generated the offer and reward A will propose at time t + 2 .\nSimilarly , B also can generate its offer and reward .\nTable 1 : Utility functions and weights of c , p and ep for each provider\n4 .\nEVALUATION\nIn this section , we evaluate the effectiveness of our approach by simulated experiments .\nContext providers A and B negotiate to reach agreement .\nThey get QoC requirements and calculate the distance between Qoc requirements and their QoC .\nFor simplicity , in our experiments , we assume that the distance has been calculated , and dA represents distance between QoC requirements and A 's QoC , dB represents distance between QoC requirements and B 's QoC .\nThe domain of dA and dB is [ 0,500 ] .\nWe assume reputation value is a real number and its domain is [ -1000 , 1000 ] , repA represents A 's reputation value and repB represents B 's reputation value .\nWe assume that both providers pay the most attention to the system 's interests , and pay the least attention to the reward , thus w1 > w2 > w3 , and the weight of Ud approximates the weight of Urep .\nA and B 's utility functions and weights of c , p and ep are defined in Table 1 .\nWe set deadline tdeadline = 100 , and define time discount function \u03b4 ( t ) and concession rate function c ( t ) of A and B as follows :\ntdeadline ) 1 Given different values of dA , dB , repA and repB , A and B negotiate to reach agreement .\nThe provider that starts the negotiation is chosen at random .\nWe hope that when dA `` dB and repA '' repB , A will get the right to provide context and get a major portion of the proceeds , and when \u0394d = dA -- dB is in a small range ( e.g. [ -50,50 ] ) and \u0394rep = repA -- repB is in a small range ( e.g. 
[ -50,50 ] ) , A and B will get approximately equal opportunities to provide context , and allocate the proceeds evenly .\nWhen dA \u2212 dB 500 approximates to dA \u2212 dB 1000 ( i.e. the two providers ' abilities to provide context information are approximately equal ) , we also hope that A and B get equal opportunities to provide context and allocate the proceeds evenly .\nAccording to the three situations above , we make three experiments as follows : Experiment 1 : In this experiment , A and B negotiate with each other for 50 times , and at each time , we assign different values to dA , dB , repA , repB ( satisfying dA `` dB and repA '' repB ) and the reserved utilities of A and B .\nWhen the experiment is completed , we find 3 negotiation games are terminated due to the utility lower than the reserved utility .\nA gets the right to provide context for 47 times .\nThe average portion of proceeds A get is about 0.683 , and B 's average portion of proceeds is 0.317 .\nThe average time cost to reach agreement is 8.4 .\nWe also find that when B asks A to provide context in its first offer , B can require and get more portion of the proceeds because of its goodwill .\nExperiment 2 : A and B also negotiate with each other for 50 times in this experiment given different values of dA , dB , repA , repB ( satisfying -- 50 < _ \u0394d = dA -- dB < _ 50 and -- 50 < _ \u0394rep = drep -- drep < _ 50 ) and the reserved utilities of A and B .\nAfter the experiment , we find that there are 8 negotiation games terminated due to the utility lower than the reserved utility .\nA and B get the right to provide context for 20 times and 22 times respectively .\nThe average portion of proceeds A get is 0.528 and B 's average portion of the proceeds is 0.472 .\nThe average time cost on negotiation is 10.5 .\nExperiment 3 : In this experiment , A and B also negotiate with each other for 50 times given dA , dB , repA , repB ( satisfying -- 0.2 < _ dA \u2212 dB 500 -- dA \u2212 dB 1000 < _ 0.2 ) and the reserved utilities of A and B .\nThere are 6 negotiation games terminated forcibly .\nA and B get the right to provide context for 21 times and 23 times respectively .\nThe average portion of proceeds A get is 0.481 and B 's average portion of the proceeds is 0.519 .\nThe average time cost on negotiation is 9.2 .\nOne thing should be mentioned is that except for d , rep , p and ep , other factors ( e.g. 
weights , time discount function \u03b4 ( t ) and concession rate function c ( t ) ) could also affect the negotiation outcome .\nThese factors should be adjusted according to providers ' beliefs at the beginning of each negotiation process .\nIn our experiments , for simplicity , we assign values to them in advance without any particular tuning .\nThe results of these experiments show that our approach can choose an appropriate context provider and can provide a relatively fair proceeds allocation .\nWhen one provider is obviously more appropriate than the other , that provider will get the right to provide context and receive a major portion of the proceeds .\nWhen both providers have approximately the same ability to provide context , their opportunities to provide context are equal and each receives about half of the proceeds .\n5 .\nRELATED WORK\nIn [ 4 ] , Huebscher and McCann have proposed an adaptive middleware design for context-aware applications .\nTheir adaptive middleware uses utility functions to choose the best context provider ( given the QoC requirements of applications and the QoC of alternative means of context acquisition ) .\nIn our negotiation model , the calculation of utility function Uc was inspired by this approach .\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [ 3 ] .\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs .\nThey also outline a software infrastructure that supports the management and use of imperfect context information .\nJudd and Steenkiste in [ 5 ] describe a generic interface to query context services allowing clients to specify their quality requirements as bounds on accuracy , confidence , update time and sample interval .\nIn [ 6 ] , Lei et al. 
present a context service which accepts freshness and confidence meta-data from context sources , and passes this along to clients so that they can adjust their level of trust accordingly .\n[ 10 ] presents a framework for realizing dynamic context consistency management .\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model , and inconsistency resolution with proactive actions to context sources .\nMost approaches to provide appropriate context utilize a centralized `` arbitrator '' .\nIn our approach , we let distributed context providers themselves decide who can provide appropriate context information .\nOur approach can reduce the burden of the middleware , because we do not need the middleware to provide a context selection mechanism .\nIt can avoid the serious consequences caused by a breakdown of the `` arbitrator '' .\nAlso , it can guarantee context providers ' interests .\n6 .\nCONCLUSION AND FUTURE WORK\nHow to provide the appropriate context information is a challenging problem in pervasive computing .\nIn this paper , we have presented a novel approach based on negotiation with rewards to attempt to solve such problem .\nDistributed context providers negotiate with each other to reach agreement on the issues who can provide the appropriate context and how they allocate the proceeds .\nThe results of our experiments have showed that our approach can choose an appropriate context provider , and also can guarantee providers ' interests by a relatively fair proceeds allocation .\nIn this paper , we only consider how to choose an appropriate context provider from two providers .\nIn the future work , this negotiation model will be extended , and more than two context providers can negotiate with each other to decide who is the most appropriate context provider .\nIn the extended negotiation model , how to design efficient negotiation strategies will be a challenging problem .\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process .\nIn fact , the context provider might deceive its opponent and provide illusive promise .\nWe should solve this problem in the future .\nWe also should deal with interactions which are interrupted by failing communication links in the future work ."} {"id": "J-17", "title": "", "abstract": "", "keyphrases": ["mechan design", "approxim algorithm", "schedul", "multi-dimension schedul", "cycl monoton", "makespan minim", "algorithm", "random mechan", "fraction mechan us", "truth mechan design", "fraction domain", "schedul"], "prmu": [], "lvl-1": "Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity Ron Lavi Industrial Engineering and Management The Technion - Israel Institute of Technology ronlavi@ie.technion.ac.il Chaitanya Swamy Combinatorics and Optimization University of Waterloo cswamy@math.uwaterloo.ca ABSTRACT We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players.\nThis is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms [22, 20].\nWe study a well-motivated special case of this problem, where the processing time of a job on each machine may either be low or high, and the low and high values are public and job-dependent.\nThis preserves the multidimensionality of the domain, and generalizes the 
restricted-machines (i.e., {pj, \u221e}) setting in scheduling.\nWe give a general technique to convert any c-approximation algorithm to a 3capproximation truthful-in-expectation mechanism.\nThis is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion.\nWhen the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism.\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain.\nOur constructions are novel in two respects.\nFirst, we do not utilize or rely on explicit price definitions to prove truthfulness; instead we design algorithms that satisfy cycle monotonicity.\nCycle monotonicity [23] is a necessary and sufficient condition for truthfulness, is a generalization of value monotonicity for multidimensional domains.\nHowever, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in singledimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting.\nSecond, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthfulin-expectation mechanism.\nThis builds upon a technique of [16], and shows the usefulness of fractional mechanisms in truthful mechanism design.\nCategories and Subject Descriptors F.2 [Analysis of Algorithms and Problem Complexity]; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION Mechanism design studies algorithmic constructions under the presence of strategic players who hold the inputs to the algorithm.\nAlgorithmic mechanism design has focused mainly on settings were the social planner or designer wishes to maximize the social welfare (or equivalently, minimize social cost), or on auction settings where revenuemaximization is the main goal.\nAlternative optimization goals, such as those that incorporate fairness criteria (which have been investigated algorithmically and in social choice theory), have received very little or no attention.\nIn this paper, we consider such an alternative goal in the context of machine scheduling, namely, makespan minimization.\nThere are n jobs or tasks that need to be assigned to m machines, where each job has to be assigned to exactly one machine.\nAssigning a job j to a machine i incurs a load (cost) of pij \u2265 0 on machine i, and the load of a machine is the sum of the loads incurred due to the jobs assigned to it; the goal is to schedule the jobs so as to minimize the maximum load of a machine, which is termed the makespan of the schedule.\nMakespan minimization is a common objective in scheduling environments, and has been well studied algorithmically in both the Computer Science and Operations Research communities (see, e.g., the survey [12]).\nFollowing the work of Nisan and Ronen [22], we consider each machine to be a strategic player or agent who privately knows its own processing time for each job, and may misrepresent these values in order to decrease its load (which is its incurred cost).\nHence, we approach the problem via mechanism design: the social designer, who holds the set of jobs to be assigned, needs to specify, in addition to a schedule, suitable payments to the players in order to incentivize them to reveal their true processing 
times.\nSuch a mechanism is called a truthful mechanism.\nThe makespan-minimization objective is quite different from the classic goal of social-welfare maximization, where one wants to maximize the total welfare (or minimize the total cost) of all players.\nInstead, it 252 corresponds to maximizing the minimum welfare and the notion of max-min fairness, and appears to be a much harder problem from the viewpoint of mechanism design.\nIn particular, the celebrated VCG [26, 9, 10] family of mechanisms does not apply here, and we need to devise new techniques.\nThe possibility of constructing a truthful mechanism for makespan minimization is strongly related to assumptions on the players'' processing times, in particular, the dimensionality of the domain.\nNisan and Ronen considered the setting of unrelated machines where the pij values may be arbitrary.\nThis is a multidimensional domain, since a player``s private value is its entire vector of processing times (pij)j. Very few positive results are known for multidimensional domains in general, and the only positive results known for multidimensional scheduling are O(m)-approximation truthful mechanisms [22, 20].\nWe emphasize that regardless of computational considerations, even the existence of a truthful mechanism with a significantly better (than m) approximation ratio is not known for any such scheduling domain.\nOn the negative side, [22] showed that no truthful deterministic mechanism can achieve approximation ratio better than 2, and strengthened this lower bound to m for two specific classes of deterministic mechanisms.\nRecently, [20] extended this lower bound to randomized mechanisms, and [8] improved the deterministic lower bound.\nIn stark contrast with the above state of affairs, much stronger (and many more) positive results are known for a special case of the unrelated machines problem, namely, the setting of related machines.\nHere, we have pij = pj/si for every i, j, where pj is public knowledge, and the speed si is the only private parameter of machine i.\nThis assumption makes the domain of players'' types single-dimensional.\nTruthfulness in such domains is equivalent to a convenient value-monotonicity condition [21, 3], which appears to make it significantly easier to design truthful mechanisms in such domains.\nArcher and Tardos [3] first considered the related machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism.\nThe gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [3] showed that there exists a truthful mechanism that always outputs an optimal schedule.\n(Recall that in the multidimensional unrelated machines setting, it is impossible to obtain a truthful mechanism with approximation ratio better than 2.)\nVarious follow-up results [2, 4, 1, 13] have strengthened the notion of truthfulness and/or improved the approximation ratio.\nSuch difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism design settings (e.g., combinatorial auctions).\nThus, in addition to the specific importance of scheduling in strategic environments, ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains.\nIn this paper, we consider the makespan-minimization problem for a special case of unrelated machines, where the processing time of a job is either low or high on each machine.\nMore precisely, in 
our setting, pij \u2208 {Lj, Hj} for every i, j, where the Lj, Hj values are publicly known (Lj \u2261low, Hj \u2261high).\nWe call this model the jobdependent two-values case.\nThis model generalizes the classic restricted machines setting, where pij \u2208 {Lj, \u221e} which has been well-studied algorithmically.\nA special case of our model is when Lj = L and Hj = H for all jobs j, which we denote simply as the two-values scheduling model.\nBoth of our domains are multidimensional, since the machines are unrelated: one job may be low on one machine and high on the other, while another job may follow the opposite pattern.\nThus, the private information of each machine is a vector specifying which jobs are low and high on it.\nThus, they retain the core property underlying the hardness of truthful mechanism design for unrelated machines, and by studying these special settings we hope to gain some insights that will be useful for tackling the general problem.\nOur Results and Techniques We present various positive results for our multidimensional scheduling domains.\nOur first result is a general method to convert any capproximation algorithm for the job-dependent two values setting into a 3c-approximation truthful-in-expectation mechanism.\nThis is one of the very few known results that use an approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem.\nOur result implies that there exists a 3-approximation truthfulin-expectation mechanism for the Lj-Hj setting.\nInterestingly, the proof of truthfulness is not based on supplying explicit prices, and our construction does not necessarily yield efficiently-computable prices (but the allocation rule is efficiently computable).\nOur second result applies to the twovalues setting (Lj = L, Hj = H), for which we improve both the approximation ratio and strengthen the notion of truthfulness.\nWe obtain a deterministic 2-approximation truthful mechanism (along with prices) for this problem.\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain.\nComplementing this, we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule (unlike in the case of related machines).\nBy exploiting the multidimensionality of the domain, we prove that no truthful deterministic mechanism can obtain an approximation ratio better than 1.14 to the makespan (irrespective of computational considerations).\nThe main technique, and one of the novelties, underlying our constructions and proofs, is that we do not rely on explicit price specifications in order to prove the truthfulness of our mechanisms.\nInstead we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm, i.e., an algorithm for which prices ensuring truthfulness exist, and then find these prices (by further delving into the proof of implementability).\nThis kind of analysis has been the method of choice in the design of truthful mechanisms for singledimensional domains, where value-monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem (see, e.g., [3, 7, 4, 1, 13]).\nBut for multidimensional domains, almost all positive results have relied on explicit price specifications in order to prove truthfulness (an exception is the work on unknown single-minded players in combinatorial auctions [17, 7]), a fact that 
yet again shows the gap in our understanding of multidimensional vs. single-dimensional domains.\nOur work is the first to leverage monotonicity conditions for truthful mechanism design in arbitrary domains.\nThe monotonicity condition we use, which is sometimes called cycle monotonicity, was first proposed by Rochet [23] (see also [11]).\nIt is a generalization of value-monotonicity and completely characterizes truthfulness in every domain.\nOur methods and analyses demonstrate the potential benefits 253 of this characterization, and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains.\nConsider, for example, our first result showing that any c-approximation algorithm can be exported to a 3c-approximation truthful-in-expectation mechanism.\nAt the level of generality of an arbitrary approximation algorithm, it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism.\nBut, cycle monotonicity does allow us to prove such a statement.\nIn fact, some such condition based only on the underlying algorithm (and not on the prices) seems necessary to prove such a general statement.\nThe method for converting approximation algorithms into truthful mechanisms involves another novel idea.\nOur randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule.\nMoving to a fractional domain allows us to plug-in truthfulness into the approximation algorithm in a rather simple fashion, while losing a factor of 2 in the approximation ratio.\nWe then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment.\nFor this, we use a recent rounding procedure of Kumar et al. 
[14] that is tailored for unrelated-machine scheduling.\nThis preserves truthfulness, but we lose another additive factor equal to the approximation ratio.\nOur construction uses and extends some observations of Lavi and Swamy [16], and further demonstrates the benefits of fractional mechanisms in truthful mechanism design.\nRelated Work Nisan and Ronen [22] first considered the makespan-minimization problem for unrelated machines.\nThey gave an m-approximation positive result and proved various lower bounds.\nRecently, Mu``alem and Schapira [20] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms, and Christodoulou, Koutsoupias, and Vidali [8] proved a (1 + \u221a 2)-lower bound for deterministic truthful mechanisms.Archer and Tardos [3] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism.\nThis been improved in [2, 4, 1, 13] to: a 2-approximation randomized mechanism [2]; an FPTAS for any fixed number of machines given by Andelman, Azar and Sorani [1], and a 3-approximation deterministic mechanism by Kov\u00b4acs [13].\nThe algorithmic problem (i.e., without requiring truthfulness) of makespan-minimization on unrelated machines is well understood and various 2-approximation algorithms are known.\nLenstra, Shmoys and Tardos [18] gave the first such algorithm.\nShmoys and Tardos [25] later gave a 2approximation algorithm for the generalized assignment problem, a generalization where there is a cost cij for assigning a job j to a machine i, and the goal is to minimize the cost subject to a bound on the makespan.\nRecently, Kumar, Marathe, Parthasarathy, and Srinivasan [14] gave a randomized rounding algorithm that yields the same bounds.\nWe use their procedure in our randomized mechanism.\nThe characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [23] (see also Gui et al. [11]).\nThis generalizes the value-monotonicity condition for single-dimensional domains which was given by Myerson [21] and rediscovered by [3].\nAs mentioned earlier, this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [3, 7, 4, 1, 13].\nFor convex domains (i.e., each players'' set of private values is convex), it is known that cycle monotonicity is implied by a simpler condition, called weak monotonicity [15, 6, 24].\nBut even this simpler condition has not found much application in truthful mechanism design for multidimensional problems.\nObjectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design.\nIn the context of combinatorial auctions, the problems of maximizing the minimum value received by a player, and computing an envy-minimizing allocation have been studied briefly.\nLavi, Mu``alem, and Nisan [15] showed that the former objective cannot be implemented truthfully; Bezakova and Dani [5] gave a 0.5-approximation mechanism for two players with additive valuations.\nLipton et al. 
[19] showed that the latter objective cannot be implemented truthfully.\nThese lower bounds were strengthened in [20].\n2.\nPRELIMINARIES 2.1 The scheduling domain In our scheduling problem, we are given n jobs and m machines, and each job must be assigned to exactly one machine.\nIn the unrelated-machines setting, each machine i is characterized by a vector of processing times $(p_{ij})_j$, where $p_{ij} \in \mathbb{R}_{\ge 0} \cup \{\infty\}$ denotes i's processing time for job j, with the value $\infty$ specifying that i cannot process j.\nWe consider two special cases of this problem: 1.\nThe job-dependent two-values case, where $p_{ij} \in \{L_j, H_j\}$ for every i, j, with $L_j \le H_j$, and the values $L_j, H_j$ are known.\nThis generalizes the classic scheduling model of restricted machines, where $H_j = \infty$.\n2.\nThe two-values case, which is the special case of the above where $L_j = L$ and $H_j = H$ for all jobs j, i.e., $p_{ij} \in \{L, H\}$ for every i, j.\nWe say that a job j is low on machine i if $p_{ij} = L_j$, and high if $p_{ij} = H_j$.\nWe will use the terms schedule and assignment interchangeably.\nWe represent a deterministic schedule by a vector $x = (x_{ij})_{i,j}$, where $x_{ij}$ is 1 if job j is assigned to machine i; thus $x_{ij} \in \{0, 1\}$ for every i, j, and $\sum_i x_{ij} = 1$ for every job j.\nWe will also consider randomized algorithms and algorithms that return a fractional assignment.\nIn both these settings, we will again specify an assignment by a vector $x = (x_{ij})_{i,j}$ with $\sum_i x_{ij} = 1$ for every job j, but now $x_{ij} \in [0, 1]$ for every i, j. For a randomized algorithm, $x_{ij}$ is simply the probability that j is assigned to i (thus, x is a convex combination of integer assignments).\nWe denote the load of machine i (under a given assignment) by $l_i = \sum_j x_{ij} p_{ij}$, and the makespan of a schedule is defined as the maximum load on any machine, i.e., $\max_i l_i$.\nThe goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule.
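As a quick, self-contained illustration of these definitions, the following minimal Python sketch computes machine loads and the makespan of a 0/1 assignment, and brute-forces OPT(p) on a tiny two-values instance; the helper names and the sample instance are illustrative assumptions, not taken from the paper.

from itertools import product

def loads(x, p):
    # l_i = sum_j x[i][j] * p[i][j] for a 0/1 (or fractional) assignment x
    return [sum(xi[j] * pi[j] for j in range(len(pi))) for xi, pi in zip(x, p)]

def makespan(x, p):
    return max(loads(x, p))

def brute_force_opt(p):
    # OPT(p): enumerate all m**n integer assignments (viable only for tiny instances)
    m, n = len(p), len(p[0])
    best = float("inf")
    for choice in product(range(m), repeat=n):  # choice[j] = machine receiving job j
        x = [[1 if choice[j] == i else 0 for j in range(n)] for i in range(m)]
        best = min(best, makespan(x, p))
    return best

# two-values instance with L = 1, H = 4; p[i][j] is machine i's time for job j
p = [[1, 1, 4],
     [4, 1, 1]]
print(brute_force_opt(p))  # prints 2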
2.2 Mechanism design We consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design.\nMechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure.\nFollowing the work of Nisan and Ronen [22], we consider the machines to be the strategic players or agents.\nThe social designer holds the set of jobs that need to be assigned, but does not know the (true) processing times of these jobs on the different machines.\nEach machine is a selfish entity that privately knows its own processing time for each job.\nAssigning a job to a machine incurs a cost to that machine equal to the true processing time of the job on the machine, and a machine may choose to misrepresent its vector of processing times, which is private, in order to decrease its cost.\nWe consider direct-revelation mechanisms: each machine reports its (possibly false) vector of processing times, the mechanism then computes a schedule and hands out payments to the players (i.e., machines) to compensate them for the cost they incur in processing their assigned jobs.\nA (direct-revelation) mechanism thus consists of a tuple $(x, P)$: x specifies the schedule, and $P = \{P_i\}$ specifies the payments handed out to the machines, where both x and the $P_i$s are functions of the reported processing times $p = (p_{ij})_{i,j}$.\nThe mechanism's goal is to compute a schedule that has near-optimal makespan with respect to the true processing times; a machine i, however, is only interested in maximizing its own utility, $P_i - l_i$, where $l_i$ is its load under the output assignment, and it may declare false processing times if this could increase its utility.\nThe mechanism must therefore incentivize the machines/players to truthfully reveal their processing times via the payments.\nThis is made precise using the notion of dominant-strategy truthfulness.\nDefinition 2.1 (Truthfulness) A scheduling mechanism is truthful if, for every machine i, every vector of processing times of the other machines $p_{-i}$, every true processing-time vector $p^1_i$ of machine i, and every other vector $p^2_i$, we have\n$P^1_i - \sum_j x^1_{ij} p^1_{ij} \ge P^2_i - \sum_j x^2_{ij} p^1_{ij}$, (1)\nwhere $(x^1, P^1)$ and $(x^2, P^2)$ are respectively the schedule and payments when the other machines declare $p_{-i}$ and machine i declares $p^1_i$ and $p^2_i$, i.e., $x^1 = x(p^1_i, p_{-i})$, $P^1_i = P_i(p^1_i, p_{-i})$, and $x^2 = x(p^2_i, p_{-i})$, $P^2_i = P_i(p^2_i, p_{-i})$.\nIn words, in a truthful mechanism no machine can improve its utility by declaring a false processing-time vector, no matter what the other machines declare.\nWe will also consider fractional mechanisms that return a fractional assignment, and randomized mechanisms that are allowed to toss coins and where the assignment and the payments may be random variables.\nThe notion of truthfulness for a fractional mechanism is the same as in Definition 2.1, where $x^1, x^2$ are now fractional assignments.\nFor a randomized mechanism, we consider the notion of truthfulness in expectation [3], which means that a machine (player) maximizes her expected utility by declaring her true processing-time vector.\nInequality (1) also defines truthfulness-in-expectation for a randomized mechanism, where $P^1_i, P^2_i$ now denote the expected payments made to player i, and $x^1, x^2$ are the fractional assignments describing the randomized algorithm's schedule (i.e., $x^k_{ij}$ is the probability that j is assigned to i in the schedule output for $(p^k_i, p_{-i})$).\nFor our two scheduling domains, the informational assumption is that the values $L_j, H_j$ are publicly known.\nThe private information of a machine is which jobs have value $L_j$ (or L) and which ones have value $H_j$ (or H) on it.\nWe emphasize that both of our domains are multidimensional, since each machine i needs to specify a vector saying which jobs are low and high on it.
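To make Definition 2.1 concrete, here is a minimal Python sketch that brute-forces inequality (1) for a given direct-revelation mechanism on a tiny two-values instance. The toy rule used here (assign each job to the first machine that declares it low, and pay each machine its declared load) is a hypothetical stand-in rather than one of the paper's mechanisms; as the check reports, it is not truthful.

from itertools import product

L, H = 1, 3          # public low/high values (two-values case)
m, n = 2, 3          # machines, jobs

def mechanism(p):
    # Toy direct-revelation rule: give each job to the first machine declaring it low
    # (machine 0 if nobody does), and pay each machine its declared load. Returns (x, P).
    x = [[0] * n for _ in range(m)]
    for j in range(n):
        i = next((i for i in range(m) if p[i][j] == L), 0)
        x[i][j] = 1
    P = [sum(x[i][j] * p[i][j] for j in range(n)) for i in range(m)]
    return x, P

def is_truthful(mech):
    # Check inequality (1) for every machine i, true type p1, lie p2, and declarations of the others.
    types = list(product([L, H], repeat=n))        # all 2^n private vectors of one machine
    for p_rest in product(types, repeat=m - 1):
        for i in range(m):
            for p1, p2 in product(types, repeat=2):
                def profile(pi):
                    rows = list(p_rest)
                    rows.insert(i, pi)
                    return rows
                (x1, P1), (x2, P2) = mech(profile(p1)), mech(profile(p2))
                u1 = P1[i] - sum(x1[i][j] * p1[j] for j in range(n))
                u2 = P2[i] - sum(x2[i][j] * p1[j] for j in range(n))  # cost w.r.t. the true type p1
                if u1 < u2 - 1e-9:
                    return False
    return True

print(is_truthful(mechanism))  # False: paying the declared load is not incentive compatible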
3.\nCYCLE MONOTONICITY Although truthfulness is defined in terms of payments, it turns out that truthfulness actually boils down to a certain algorithmic condition of monotonicity.\nThis seems to have been first observed for multidimensional domains by Rochet [23] in 1987, and has been used successfully in algorithmic mechanism design several times, but for single-dimensional domains.\nHowever, for multidimensional domains, the monotonicity condition is more involved and there has been no success in employing it in the design of truthful mechanisms.\nMost positive results for multidimensional domains have relied on explicit price specifications in order to prove truthfulness.\nOne of the main contributions of this paper is to demonstrate that the monotonicity condition for multidimensional settings, which is sometimes called cycle monotonicity, can indeed be effectively utilized to devise truthful mechanisms.\nWe include a brief exposition on it for completeness.\nThe exposition here is largely based on [11].\nCycle monotonicity is best described in the abstract social choice setting: there is a finite set A of alternatives, there are m players, and each player i has a private type (valuation function) $v_i : A \to \mathbb{R}$, where $v_i(a)$ should be interpreted as i's value for alternative a.\nIn the scheduling domain, A represents all the possible assignments of jobs to machines, and $v_i(a)$ is the negative of i's load in the schedule a. Let $V_i$ denote the set of all possible types of player i.\nA mechanism is a tuple $(f, \{P_i\})$ where $f : V_1 \times \cdots \times V_m \to A$ is the algorithm for choosing the alternative, and $P_i : V_1 \times \cdots \times V_m \to \mathbb{R}$ is the price charged to player i (in the scheduling setting, the mechanism pays the players, which corresponds to negative prices).\nThe mechanism is truthful if for every i, every $v_{-i} \in V_{-i} = \prod_{i' \ne i} V_{i'}$, and any $v_i, v_i' \in V_i$ we have $v_i(a) - P_i(v_i, v_{-i}) \ge v_i(b) - P_i(v_i', v_{-i})$, where $a = f(v_i, v_{-i})$ and $b = f(v_i', v_{-i})$.\nA basic question that arises is: given an algorithm $f : V_1 \times \cdots \times V_m \to A$, do there exist prices that will make the resulting mechanism truthful?\nIt is well known (see, e.g., [15]) that the price $P_i$ can only depend on the alternative chosen and the others' declarations, that is, we may write $P_i : V_{-i} \times A \to \mathbb{R}$.\nThus, truthfulness implies that for every i, every $v_{-i} \in V_{-i}$, and any $v_i, v_i' \in V_i$ with $f(v_i, v_{-i}) = a$ and $f(v_i', v_{-i}) = b$, we have $v_i(a) - P_i(a, v_{-i}) \ge v_i(b) - P_i(b, v_{-i})$.\nNow fix a player i, and fix the declarations $v_{-i}$ of the others.\nWe seek an assignment to the variables $\{P_a\}_{a \in A}$ such that $v_i(a) - v_i(b) \ge P_a - P_b$ for every $a, b \in A$ and $v_i \in V_i$ with $f(v_i, v_{-i}) = a$. (Strictly speaking, we should use $A' = f(V_i, v_{-i})$ instead of A here.)\nDefine $\delta_{a,b} := \inf\{v_i(a) - v_i(b) : v_i \in V_i,\ f(v_i, v_{-i}) = a\}$.\nWe can now rephrase the above price-assignment problem: we seek an assignment to the variables $\{P_a\}_{a \in A}$ such that\n$P_a - P_b \le \delta_{a,b}$ for all $a, b \in A$. (2)\nThis is easily solved by looking at the allocation graph and applying a standard basic result of graph theory.\nDefinition 3.1 (Gui et al. [11]) The allocation graph of f is a directed weighted graph $G = (A, E)$ where $E = A \times A$ and the weight of an edge $b \to a$ (for any $a, b \in A$) is $\delta_{a,b}$.\nTheorem 3.2 There exists a feasible assignment to (2) iff the allocation graph has no negative-length cycles.\nFurthermore, if all cycles are non-negative, a feasible assignment is obtained as follows: fix an arbitrary node $a^* \in A$ and set $P_a$ to be the length of the shortest path from $a^*$ to a.\nThis leads to the following definition, which is another way of phrasing the condition that the allocation graph have no negative cycles.\nDefinition 3.3 (Cycle monotonicity) A social choice function f satisfies cycle monotonicity if for every player i, every $v_{-i} \in V_{-i}$, every integer K, and every $v^1_i, \ldots, v^K_i \in V_i$,\n$\sum_{k=1}^{K} \big[ v^k_i(a_k) - v^k_i(a_{k+1}) \big] \ge 0$,\nwhere $a_k = f(v^k_i, v_{-i})$ for $1 \le k \le K$, and $a_{K+1} = a_1$.\nCorollary 3.4 There exist prices P such that the mechanism (f, P) is truthful iff f satisfies cycle monotonicity.\n(It is not clear whether Theorem 3.2, and hence this statement, hold if A is not finite.)
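Theorem 3.2 suggests a direct computational recipe: build the allocation graph, detect negative cycles, and read prices off shortest-path distances. The sketch below is a minimal Python rendering of that recipe, under the assumption that the alternative set A is small and the weights delta[(a, b)] have already been computed (e.g., by enumerating the player's type space); it is an illustration, not the paper's procedure.

import itertools

def allocation_graph_prices(alternatives, delta):
    # alternatives: the finite list A; delta[(a, b)] = inf over types mapped to a of v(a) - v(b).
    # Returns {a: P_a} satisfying P_a - P_b <= delta[(a, b)], or None if a negative cycle exists
    # (i.e., f is not cycle monotone for this player and these fixed declarations v_{-i}).
    root = alternatives[0]
    dist = {a: (0.0 if a == root else float("inf")) for a in alternatives}
    # Bellman-Ford on the complete graph with edges b -> a of weight delta[(a, b)]
    for _ in range(len(alternatives) - 1):
        for a, b in itertools.permutations(alternatives, 2):
            if dist[b] + delta[(a, b)] < dist[a]:
                dist[a] = dist[b] + delta[(a, b)]
    # one extra relaxation round: any strict improvement certifies a negative cycle
    for a, b in itertools.permutations(alternatives, 2):
        if dist[b] + delta[(a, b)] < dist[a] - 1e-12:
            return None
    return dist  # P_a = length of the shortest path from the arbitrary root a*

In the scheduling domain, $v_i(a) = -l_i(a)$, so each $\delta_{a,b}$ is an infimum over the processing-time vectors that f maps to a; by Corollary 3.4, prices exist exactly when this routine never reports a negative cycle.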
We now consider our specific scheduling domain.\nFix a player i, $p_{-i}$, and any $p^1_i, \ldots, p^K_i$.\nLet $x(p^k_i, p_{-i}) = x^k$ for $1 \le k \le K$, and let $x^{K+1} = x^1$, $p^{K+1} = p^1$.\nEach $x^k$ could be a $\{0,1\}$-assignment or a fractional assignment.\nWe have $v^k_i(x^k) = -\sum_j x^k_{ij} p^k_{ij}$, so cycle monotonicity translates to $\sum_{k=1}^{K} \big[ -\sum_j x^k_{ij} p^k_{ij} + \sum_j x^{k+1}_{ij} p^k_{ij} \big] \ge 0$.\nRearranging, we get\n$\sum_{k=1}^{K} \sum_j x^{k+1}_{ij} \big( p^k_{ij} - p^{k+1}_{ij} \big) \ge 0$. (3)\nThus (3) reduces our mechanism-design problem to a concrete algorithmic problem.\nFor most of this paper, we will consequently ignore any strategic considerations and focus on designing an approximation algorithm for minimizing makespan that satisfies (3).\n4.\nA GENERAL TECHNIQUE TO OBTAIN RANDOMIZED MECHANISMS In this section, we consider the case of job-dependent $L_j, H_j$ values (with $L_j \le H_j$), which generalizes the classical restricted-machines model (where $H_j = \infty$).\nWe show the power of randomization by providing a general technique that converts any c-approximation algorithm into a 3c-approximation, truthful-in-expectation mechanism.\nThis is one of the few results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms when the algorithm is given as a black box.\nOur construction and proof are simple, and based on two ideas.\nFirst, as outlined above, we prove truthfulness using cycle monotonicity.\nIt seems unlikely that for an arbitrary approximation algorithm, given only as a black box, one would be able to come up with payments in order to prove truthfulness; but cycle monotonicity allows us to prove precisely this.\nSecond, we obtain our randomized mechanism by (a) first moving to a fractional domain, and constructing a fractional truthful mechanism that is allowed to return fractional assignments; then (b) using a rounding procedure to express the fractional schedule as a convex combination of integer schedules.\nThis builds upon a theme introduced by Lavi and Swamy [16], namely that of using fractional mechanisms to obtain truthful-in-expectation mechanisms.\nWe should point out, however, that one cannot simply plug in the results of [16].\nTheir results hold for social-welfare-maximization problems and rely on using VCG to obtain a fractional truthful mechanism.\nVCG, however, does not apply to makespan minimization, and in our case even the existence of a near-optimal fractional truthful mechanism is not known.\nWe use the following result adapted from [16].\nLemma 4.1 (Lavi and Swamy [16]) Let M = (x, P) be a fractional truthful mechanism.\nLet A be a randomized rounding algorithm that, given a fractional assignment x, outputs a random assignment X such that $E[X_{ij}] = x_{ij}$ for all i, j.\nThen there exist payments $P'$ such that the mechanism $M' = (A, P')$ is truthful in expectation.\nFurthermore, if M is individually rational then $M'$ is individually rational for every realization of coin tosses.\nLet OPT(p) denote the optimal makespan (over integer schedules) for instance p.\nAs our first step, we take a c-approximation algorithm and convert it to a 2c-approximation fractional truthful mechanism.\nThis conversion works even when the approximation algorithm returns only a fractional schedule (satisfying certain properties) of makespan at most $c \cdot OPT(p)$ for every instance p.\nWe prove truthfulness by showing that the fractional algorithm satisfies cycle monotonicity (3).\nNotice that the alternative-set of our fractional mechanism is finite 
(although the set of all fractional assignments is infinite): its cardinality is at most that of the inputdomain, which is at most 2mn in the two-value case.\nThus, we can apply Corollary 3.4 here.\nTo convert this fractional truthful mechanism into a randomized truthful mechanism we need a randomized rounding procedure satisfying the requirements of Lemma 4.1.\nFortunately, such a procedure is already provided by Kumar, Marathe, Parthasarathy, and Srinivasan [14].\nLemma 4.2 (Kumar et al. [14]) Given a fractional assignment x and a processing time vector p, there exists a randomized rounding procedure that yields a (random) assignment X such that, 1.\nfor any i, j, E \u02c6 Xij \u02dc = xij.\n2.\nfor any i, P j Xijpij < P j xijpij + max{j:xij \u2208(0,1)} pij with probability 1.\nProperty 1 will be used to obtain truthfulness in expectation, and property 2 will allow us to prove an approximation guarantee.\nWe first show that any algorithm that returns a fractional assignment having certain properties satisfies cycle monotonicity.\nLemma 4.3 Let A be an algorithm that for any input p, outputs a (fractional) assignment x such that, if pij = Hj then xij \u2264 1/m, and if pij = Lj then xij \u2265 1/m.\nThen A satisfies cycle-monotonicity.\nProof.\nFix a player i, and the vector of processing times of the other players p\u2212i.\nWe need to prove (3), that is, PK k=1 P j xk+1 ij ` pk ij \u2212 pk+1 ij \u00b4 \u2265 0 for every p1 i , ... , pK i , where index k = K + 1 is taken to be k = 1.\nWe will show that for every job j, PK k=1 xk+1 ij ` pk ij \u2212 pk+1 ij \u00b4 \u2265 0.\nIf pk ij is the same for all k (either always Lj or always Hj), then the above inequality clearly holds.\nOtherwise we can 256 divide the indices 1, ... , K, into maximal segments, where a maximal segment is a maximal set of consecutive indices k , k + 1, ... , k \u2212 1, k (where K + 1 \u2261 1) such that pk ij = Hj \u2265 pk +1 ij \u2265 \u00b7 \u00b7 \u00b7 \u2265 pk ij = Lj.\nThis follows because there must be some k such that pk ij = Hj > pk\u22121 ij = Lj.\nWe take k = k and then keep including indices in this segment till we reach a k such that pk ij = Lj and pk+1 ij = Hj.\nWe set k = k, and then start a new maximal segment with index k + 1.\nNote that k = k and k + 1 = k \u2212 1.\nWe now have a subset of indices and we can continue recursively.\nSo all indices are included in some maximal segment.\nWe will show that for every such maximal segment k , k +1, ... , k ,P k \u22121\u2264k 0 implies that pij \u2264 T, where T is the makespan of x. (In particular, note that any algorithm that returns an integral assignment has these properties.)\nOur algorithm, which we call A , returns the following assignment xF .\nInitialize xF ij = 0 for all i, j. For every i, j, 1.\nif pij = Hj, set xF ij = P i :pi j =Hj xi j/m; 2.\nif pij = Lj, set xF ij = xij + P i =i:pi j =Lj (xi j \u2212xij)/m+ P i :pi j =Hj xi j/m.\nTheorem 4.4 Suppose algorithm A satisfies the conditions in Algorithm 1 and returns a makespan of at most c\u00b7OPT(p) for every p. 
Then, the algorithm $A'$ constructed above is a 2c-approximation, cycle-monotone fractional algorithm.\nMoreover, if $x^F_{ij} > 0$ on input p, then $p_{ij} \le c \cdot OPT(p)$.\nProof.\nFirst, note that $x^F$ is a valid assignment: for every job j, $\sum_i x^F_{ij} = \sum_i x_{ij} + \sum_{i,\, i' \ne i :\, p_{ij} = p_{i'j} = L_j} (x_{i'j} - x_{ij})/m = \sum_i x_{ij} = 1$.\nWe also have that if $p_{ij} = H_j$ then $x^F_{ij} = \sum_{i' : p_{i'j} = H_j} x_{i'j}/m \le 1/m$.\nIf $p_{ij} = L_j$, then $x^F_{ij} = x_{ij}(1 - \ell/m) + \sum_{i' \ne i} x_{i'j}/m$, where $\ell = |\{i' \ne i : p_{i'j} = L_j\}| \le m - 1$; so $x^F_{ij} \ge \sum_{i'} x_{i'j}/m \ge 1/m$.\nThus, by Lemma 4.3, $A'$ satisfies cycle monotonicity.\nThe total load on any machine i under $x^F$ is at most $\sum_{j : p_{ij} = H_j} \sum_{i' : p_{i'j} = H_j} H_j \cdot x_{i'j}/m + \sum_{j : p_{ij} = L_j} L_j \big( x_{ij} + \sum_{i' \ne i} x_{i'j}/m \big)$, which is at most $\sum_j p_{ij} x_{ij} + \sum_{i' \ne i} \sum_j p_{i'j} x_{i'j}/m \le 2c \cdot OPT(p)$.\nFinally, if $x^F_{ij} > 0$ and $p_{ij} = L_j$, then $p_{ij} \le OPT(p)$.\nIf $p_{ij} = H_j$, then for some $i'$ (possibly $i' = i$) with $p_{i'j} = H_j$ we have $x_{i'j} > 0$, so by assumption, $p_{i'j} = H_j = p_{ij} \le c \cdot OPT(p)$.\nTheorem 4.4, combined with Lemmas 4.1 and 4.2, gives a 3c-approximation, truthful-in-expectation mechanism.\nThe computation of payments will depend on the actual approximation algorithm used.\nSection 3 does, however, give an explicit procedure to compute payments ensuring truthfulness, though perhaps not in polynomial time.\nTheorem 4.5 The procedure in Algorithm 1 converts any c-approximation fractional algorithm into a 3c-approximation, truthful-in-expectation mechanism.\nTaking A in Algorithm 1 to be the algorithm that returns an LP-optimum assignment satisfying the required conditions (see [18, 25]), we obtain a 3-approximation mechanism.\nCorollary 4.6 There is a truthful-in-expectation mechanism with approximation ratio 3 for the $L_j$-$H_j$ setting.
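As an illustration of the smoothing step used in Algorithm 1 (a sketch under the assumption that the underlying fractional assignment is given as a plain list of lists; the function name is hypothetical), the Python snippet below builds $x^F$ for the job-dependent two-values case and asserts the two properties that drive Lemma 4.3.

def smooth_fractional_assignment(x, p, L, H):
    # x[i][j]: fractional assignment with sum_i x[i][j] == 1 for each job j.
    # p[i][j]: declared processing time, equal to L[j] or H[j]. Returns xF as in Algorithm 1.
    m, n = len(x), len(x[0])
    assert all(p[i][j] in (L[j], H[j]) for i in range(m) for j in range(n))
    xF = [[0.0] * n for _ in range(m)]
    for j in range(n):
        high_mass = sum(x[i2][j] for i2 in range(m) if p[i2][j] == H[j])
        for i in range(m):
            if p[i][j] == H[j]:
                xF[i][j] = high_mass / m
            else:  # p[i][j] == L[j]
                xF[i][j] = (x[i][j]
                            + sum(x[i2][j] - x[i][j] for i2 in range(m)
                                  if i2 != i and p[i2][j] == L[j]) / m
                            + high_mass / m)
    # the property used in Lemma 4.3: at most 1/m on high entries, at least 1/m on low entries
    for i in range(m):
        for j in range(n):
            if p[i][j] == H[j]:
                assert xF[i][j] <= 1.0 / m + 1e-9
            else:
                assert xF[i][j] >= 1.0 / m - 1e-9
    return xF

Rounding $x^F$ with the procedure of Lemma 4.2 then yields the truthful-in-expectation mechanism of Theorem 4.5.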
5.\nA DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE We now present a deterministic 2-approximation truthful mechanism for the case where $p_{ij} \in \{L, H\}$ for all i, j.\nIn the sequel, we will often say that j is assigned to a low machine to denote that j is assigned to a machine i where $p_{ij} = L$.\nWe will call a job j a low job of machine i if $p_{ij} = L$; the low-load of i is the load on i due to its low jobs, i.e., $\sum_{j : p_{ij} = L} x_{ij} p_{ij}$.\nAs in Section 4, our goal is to obtain an approximation algorithm that satisfies cycle monotonicity.\nWe first obtain a simplification of condition (3) for our two-values $\{L, H\}$ scheduling domain (Proposition 5.1) that will be convenient to work with.\nWe describe our algorithm in Section 5.1.\nIn Section 5.2, we bound its approximation guarantee and prove that it satisfies cycle monotonicity.\nIn Section 5.3, we compute explicit payments giving a truthful mechanism.\nFinally, in Section 5.4 we show that no deterministic mechanism can achieve the optimum makespan.\nDefine\n$n^{k,\ell}_H = |\{j : x^k_{ij} = 1,\ p^k_{ij} = L,\ p^{\ell}_{ij} = H\}|$ (4)\n$n^{k,\ell}_L = |\{j : x^k_{ij} = 1,\ p^k_{ij} = H,\ p^{\ell}_{ij} = L\}|$. (5)\nThen, $\sum_j x^{k+1}_{ij} (p^k_{ij} - p^{k+1}_{ij}) = (n^{k+1,k}_H - n^{k+1,k}_L)(H - L)$.\nPlugging this into (3) and dividing by $(H - L)$, we get the following.\nProposition 5.1 Cycle monotonicity in the two-values scheduling domain is equivalent to the condition that, for every player i, every $p_{-i}$, every integer K, and every $p^1_i, \ldots, p^K_i$,\n$\sum_{k=1}^{K} \big( n^{k+1,k}_H - n^{k+1,k}_L \big) \ge 0$. (6)\n5.1 A cycle-monotone approximation algorithm We now describe an algorithm that satisfies condition (6) and achieves a 2-approximation.\nWe will assume that L, H are integers, which is without loss of generality.\nA core component of our algorithm will be a procedure that takes an integer load threshold T and computes an integer partial assignment x of jobs to machines such that (a) a job is only assigned to a low machine; (b) the load on any machine is at most T; and (c) the number of jobs assigned is maximized.\nSuch an assignment can be computed by solving a max-flow problem: we construct a directed bipartite graph with a node for every job j and every machine i, and an edge (j, i) of infinite capacity if $p_{ij} = L$.\nWe also add a source node s with edges (s, j) having capacity 1, and a sink node t with edges (i, t) having capacity $\lfloor T/L \rfloor$.\nClearly, any integer flow in this network corresponds to a valid integer partial assignment x of makespan at most T, where $x_{ij} = 1$ iff there is a flow of 1 on the edge from j to i.\nWe will therefore use the terms assignment and flow interchangeably.\nMoreover, there is always an integral max-flow (since all capacities are integers).\nWe will often refer to such a max-flow as the max-flow for (p, T).\nWe need one additional concept before describing the algorithm.\nThere could potentially be many max-flows, and we will be interested in the most balanced ones, which we formally define as follows.\nFix some max-flow.\nLet $n^i_{p,T}$ be the amount of flow on edge (i, t) (or equivalently, the number of jobs assigned to i in the corresponding schedule), and let $n_{p,T}$ be the total size of the max-flow, i.e., $n_{p,T} = \sum_i n^i_{p,T}$.\nFor any $T' \le T$, define $n^i_{p,T}|_{T'} = \min(n^i_{p,T}, \lfloor T'/L \rfloor)$, that is, we truncate the flow/assignment on i so that the total load on i is at most $T'$.\nDefine $n_{p,T}|_{T'} = \sum_i n^i_{p,T}|_{T'}$.\nWe define a prefix-maximal flow or assignment for T as follows.\nDefinition 5.2 (Prefix-maximal flow) A flow for the above network with threshold T is prefix-maximal if for every integer $T' \le T$, we have $n_{p,T}|_{T'} = n_{p,T'}$.\nThat is, in a prefix-maximal flow for (p, T), if we truncate the flow at some $T' \le T$, we are left with a max-flow for $(p, T')$.\nAn elementary fact about flows is that if an assignment/flow x is not a maximum flow for (p, T) then there must be an augmenting path P = (s, j1, i1, ... 
, jK , iK , t) in the residual graph that allows us to increase the size of the flow.\nThe interpretation is that in the current assignment, j1 is unassigned, xi j = 0, which is denoted by the forward edges (j , i ), and xi j +1 = 1, which is denoted by the reverse edges (i , j +1).\nAugmenting x using P changes the assignment so that each j is assigned to i in the new assignment, which increases the value of the flow by 1.\nA simple augmenting path does not decrease the load of any machine; thus, one can argue that a prefix-maximal flow for a threshold T always exists.\nWe first compute a max-flow for threshold 1, use simple augmenting paths to augment it to a max-flow for threshold 2, and repeat, each time augmenting the max-flow for the previous threshold t to a max-flow for threshold t + 1 using simple augmenting paths.\nAlgorithm 2 Given a vector of processing times p, construct an assignment of jobs to machines as follows.\n1.\nCompute T\u2217 (p) = min \u02d8 T \u2265 H, T multiple of L : np,T \u00b7 L + (n \u2212 np,T ) \u00b7 H \u2264 m \u00b7 T \u00af .\nNote that np,T \u00b7L+(n\u2212np,T )\u00b7H \u2212m\u00b7T is a decreasing function of T, so T\u2217 (p) can be computed in polynomial time via binary search.\n2.\nCompute a prefix-maximal flow for threshold T\u2217 (p) and the corresponding partial assignment (i.e., j is assigned to i iff there is 1 unit of flow on edge (j, i)).\n3.\nAssign the remaining jobs, i.e., the jobs unassigned in the flow-phase, in a greedy manner as follows.\nConsider these jobs in an arbitrary order and assign each job to the machine with the current lowest load (where the load includes the jobs assigned in the flow-phase).\nOur algorithm needs to compute a prefix-maximal assignment for the threshold T\u2217 (p).\nThe proof showing the existence of a prefix-maximal flow only yields a pseudopolynomial time algorithm for computing it.\nBut notice that the max-flow remains the same for any T \u2265 T = n \u00b7 L.\nSo a prefix-maximal flow for T is also prefix-maximal for any T \u2265 T .\nThus, we only need to compute a prefix-maximal flow for T = min{T\u2217 (p), T }.\nThis can be be done in polynomial time by using the iterative-augmenting-paths algorithm in the existence proof to compute iteratively the maxflow for the polynomially many multiples of L up to (and including) T .\nTheorem 5.3 One can efficiently compute payments that when combined with Algorithm 2 yield a deterministic 2approximation truthful mechanism for the two-values scheduling domain.\n5.2 Analysis Let OPT(p) denote the optimal makespan for p.\nWe now prove that Algorithm 2 is a 2-approximation algorithm that satisfies cycle monotonicity.\nThis will then allow us to compute payments in Section 5.3 and prove Theorem 5.3.\n5.2.1 Proof of approximation Claim 5.4 If OPT(p) < H, the makespan is at most OPT(p).\nProof.\nIf OPT(p) < H, it must be that the optimal schedule assigns all jobs to low machines, so np,OPT(p) = n. Thus, we have T\u2217 (p) = L \u00b7 H L .\nFurthermore, since we compute a prefix-maximal flow for threshold T\u2217 (p) we have np,T \u2217(p)|OPT(p) = np,OPT(p) = n, which implies that the load on each machine is at most OPT(p).\nSo in this case the makespan is at most (and hence exactly) OPT(p).\nClaim 5.5 If OPT(p) \u2265 H, then T\u2217 (p) \u2264 L \u00b7 OPT(p) L \u2264 OPT(p) + L. 
Proof.\nLet nOPT(p) be the number of jobs assigned to low machines in an optimum schedule.\nThe total load on all machines is exactly nOPT(p) \u00b7 L + (n \u2212 nOPT(p)) \u00b7 H, and is at most m \u00b7 OPT(p), since every machine has load at most OPT(p).\nSo taking T = L \u00b7 OPT(p) L \u2265 H, since np,T \u2265 nOPT(p) we have that np,T \u00b7L+(n\u2212np,T )\u00b7H \u2264 m\u00b7T. Hence, T\u2217 (p), the smallest such T, is at most L \u00b7 OPT(p) L .\nClaim 5.6 Each job assigned in step 3 of the algorithm is assigned to a high machine.\n258 Proof.\nSuppose j is assigned to machine i in step 3.\nIf pij = L, then we must have ni p,T \u2217(p) = T\u2217 (p), otherwise we could have assigned j to i in step 2 to obtain a flow of larger value.\nSo at the point just before j is assigned in step 3, the load of each machine must be at least T\u2217 (p).\nHence, the total load after j is assigned is at least m \u00b7 T\u2217 (p) + L > m \u00b7 T\u2217 (p).\nBut the total load is also at most np,T \u2217(p) \u00b7 L + (n \u2212 np,T \u2217(p)) \u00b7 H \u2264 m \u00b7 T\u2217 (p), yielding a contradiction.\nLemma 5.7 The above algorithm returns a schedule with makespan at most OPT(p)+max \u02d8 L, H(1\u2212 1 m ) \u00af \u2264 2\u00b7OPT(p).\nProof.\nIf OPT(p) < H, then by Claim 5.4, we are done.\nSo suppose OPT(p) \u2265 H. By Claim 5.5, we know that T\u2217 (p) \u2264 OPT(p) + L.\nIf there are no unassigned jobs after step 2 of the algorithm, then the makespan is at most T\u2217 (p) and we are done.\nSo assume that there are some unassigned jobs after step 2.\nWe will show that the makespan after step 3 is at most T +H ` 1\u2212 1 m \u00b4 where T = min \u02d8 T\u2217 (p), OPT(p) \u00af .\nSuppose the claim is false.\nLet i be the machine with the maximum load, so li > T + H ` 1 \u2212 1 m \u00b4 .\nLet j be the last job assigned to i in step 3, and consider the point just before it is assigned to i.\nSo li > T \u2212 H/m at this point.\nAlso since j is assigned to i, by our greedy rule, the load on all the other machines must be at least li.\nSo the total load after j is assigned, is at least H + m \u00b7 li > m \u00b7 T (since pij = H by Claim 5.6).\nAlso, for any assignment of jobs to machines in step 3, the total load is at most np,T \u2217(p) \u00b7 L + (n \u2212 np,T \u2217(p)) \u00b7 H since there are np,T \u2217(p) jobs assigned to low machines.\nTherefore, we must have m \u00b7 T < np,T \u2217(p) \u00b7 L + (n \u2212 np,T \u2217(p)) \u00b7 H.\nBut we will argue that m \u00b7 T \u2265 np,T \u2217(p) \u00b7L+(n\u2212np,T \u2217(p))\u00b7H, which yields a contradiction.\nIf T = T\u2217 (p), this follows from the definition of T\u2217 (p).\nIf T = OPT(p), then letting nOPT(p) denote the number of jobs assigned to low machines in an optimum schedule, we have np,T \u2217(p) \u2265 nOPT(p).\nSo np,T \u2217(p) \u00b7L+(n\u2212np,T \u2217(p))\u00b7H \u2264 nOPT(p) \u00b7L+(n\u2212nOPT(p))\u00b7H.\nThis is exactly the total load in an optimum schedule, which is at most m \u00b7 OPT(p).\n5.2.2 Proof of cycle monotonicity Lemma 5.8 Consider any two instances p = (pi, p\u2212i) and p = (pi, p\u2212i) where pi \u2265 pi, i.e., pij \u2265 pij \u2200j.\nIf T is a threshold such that np,T > np ,T , then every maximum flow x for (p , T) must assign all jobs j such that pij = L. 
Proof.\nLet Gp denote the residual graph for (p , T) and flow x .\nSuppose by contradiction that there exists a job j\u2217 with pij\u2217 = L that is unassigned by x .\nSince pi \u2265 pi, all edges (j, i) that are present in the network for (p , T) are also present in the network for (p, T).\nThus, x is a valid flow for (p, T).\nBut it is not a max-flow, since np,T > np ,T .\nSo there exists an augmenting path P in the residual graph for (p, T) and flow x .\nObserve that node i must be included in P, otherwise P would also be an augmenting path in the residual graph Gp contradicting the fact that x is a maxflow.\nIn particular, this implies that there is a path P \u2282 P from i to the sink t. Let P = (i, j1, i1, ... , jK , iK , t).\nAll the edges of P are also present as edges in Gp - all reverse edges (i , j +1) are present since such an edge implies that xi j +1 = 1; all forward edges (j , i ) are present since i = i so pi j = pi j = L, and xi j +1 = 0.\nBut then there is an augmenting path (j\u2217 , i, j1, i1, ... , jK , iK , t) in Gp which contradicts the maximality of x .\nLet L denote the all-low processing time vector.\nDefine TL i (p\u2212i) = T\u2217 (L, p\u2212i).\nSince we are focusing on machine i, and p\u2212i is fixed throughout, we abbreviate TL i (p\u2212i) to TL .\nAlso, let pL = (L, p\u2212i).\nNote that T\u2217 (p) \u2265 TL for every instance p = (pi, p\u2212i).\nCorollary 5.9 Let p = (pi, p\u2212i) be any instance and let x be any prefix-maximal flow for (p, T\u2217 (p)).\nThen, the low-load on machine i is at most TL .\nProof.\nLet T\u2217 = T\u2217 (p).\nIf T\u2217 = TL , then this is clearly true.\nOtherwise, consider the assignment x truncated at TL .\nSince x is prefix-maximal, we know that this constitutes a max-flow for (p, TL ).\nAlso, np,T L < npL,T L because T\u2217 > TL .\nSo by Lemma 5.8, this truncated flow must assign all the low jobs of i. Hence, there cannot be a job j with pij = L that is assigned to i after the TL -threshold since then j would not be assigned by this truncated flow.\nThus, the low-load of i is at most TL .\nUsing these properties, we will prove the following key inequality: for any p1 = (p\u2212i, p1 i ) and p2 = (p\u2212i, p2 i ), np1,T L \u2265 np2,T L \u2212 n2,1 H + n2,1 L (7) where n2,1 H and n2,1 L are as defined in (4) and (5), respectively.\nNotice that this immediately implies cycle monotonicity, since if we take p1 = pk and p2 = pk+1 , then (7) implies that npk,T L \u2265 npk+1,T L \u2212 nk+1,k H + nk+1,k L ; summing this over all k = 1, ... , K gives (6).\nLemma 5.10 If T\u2217 (p1 ) > TL , then (7) holds.\nProof.\nLet T1 = T\u2217 (p1 ) and T2 = T\u2217 (p2 ).\nTake the prefix-maximal flow x2 for (p2 , T2 ), truncate it at TL , and remove all the jobs from this assignment that are counted in n2,1 H , that is, all jobs j such that x2 ij = 1, p2 ij = L, p1 ij = H. Denote this flow by x. 
Observe that x is a valid flow for (p1 , TL ), and the size of this flow is exactly np2,T 2 |T L \u2212n2,1 H = np2,T L \u2212n2,1 H .\nAlso none of the jobs that are counted in n2,1 L are assigned by x since each such job j is high on i in p2 .\nSince T1 > TL , we must have np1,T L < npL,T L .\nSo if we augment x to a max-flow for (p1 , TL ), then by Lemma 5.8 (with p = pL and p = p1 ), all the jobs corresponding to n2,1 L must be assigned in this max-flow.\nThus, the size of this max-flow is at least (size of x) + n2,1 L , that is, np1,T L \u2265 np2,T L \u2212 n2,1 H + n2,1 L , as claimed.\nLemma 5.11 Suppose T\u2217 (p1 ) = TL .\nThen (7) holds.\nProof.\nAgain let T1 = T\u2217 (p1 ) = TL and T2 = T\u2217 (p2 ).\nLet x1 , x2 be the complete assignment, i.e., the assignment after both steps 2 and 3, computed by our algorithm for p1 , p2 respectively.\nLet S = {j : x2 ij = 1 and p2 ij = L} and S = {j : x2 ij = 1 and p1 ij = L}.\nTherefore, |S | = |S| \u2212 n2,1 H + n2,1 L and |S| = ni p2,T 2 = ni p2,T 2 |T L (by Corollary 5.9).\nLet T = |S | \u00b7 L.\nWe consider two cases.\nSuppose first that T \u2264 TL .\nConsider the following flow for (p1 , TL ): assign to every machine other than i the lowassignment of x2 truncated at TL , and assign the jobs in S to machine i.\nThis is a valid flow for (p1 , TL ) since the load on i is T \u2264 TL .\nIts size is equal to P i =i ni p2,T 2 |T L +|S | = np2,T 2 |T L \u2212n2,1 H +n2,1 L = np2,T L \u2212n2,1 H +n2,1 L .\nThe size of the max-flow for (p1 , TL ) is no smaller, and the claim follows.\n259 Now suppose T > TL .\nSince |S| \u00b7 L \u2264 TL (by Corollary 5.9), it follows that n2,1 L > n2,1 H \u2265 0.\nLet \u02c6T = T \u2212 L \u2265 TL since T , TL are both multiples of L. Let M = np2,T 2 \u2212 n2,1 H + n2,1 L = |S | + P i =i ni p2,T 2 .\nWe first show that m \u00b7 \u02c6T < M \u00b7 L + (n \u2212 M) \u00b7 H. (8) Let N be the number of jobs assigned to machine i in x2 .\nThe load on machine i is |S|\u00b7L+(N \u2212|S|)\u00b7H \u2265 |S |\u00b7L\u2212n2,1 L \u00b7 L+(N\u2212|S|)\u00b7H which is at least |S |\u00b7L > \u02c6T since n2,1 L \u2264 N\u2212 |S|.\nThus we get the inequality |S |\u00b7L+(N \u2212|S |)\u00b7H > \u02c6T.\nNow consider the point in the execution of the algorithm on instance p2 just before the last high job is assigned to i in Step 3 (there must be such a job since n2,1 L > 0).\nThe load on i at this point is |S| \u00b7 L + (N \u2212 |S| \u2212 1) \u00b7 H which is least |S | \u00b7 L \u2212 L = \u02c6T by a similar argument as above.\nBy the greedy property, every i = i also has at least this load at this point, so P j p2 i jx2 i j \u2265 \u02c6T.\nAdding these inequalities for all i = i, and the earlier inequality for i, we get that |S | \u00b7 L + (N \u2212 |S |) \u00b7 H + P i =i P j p2 i jx2 i j > m \u02c6T.\nBut the left-hand-side is exactly M \u00b7 L + (n \u2212 M) \u00b7 H. On the other hand, since T1 = TL , we have m \u00b7 \u02c6T \u2265 m \u00b7 TL \u2265 np1,T L \u00b7 L + (n \u2212 np1,T L ) \u00b7 H. (9) Combining (8) and (9), we get that np1,T L > M = np2,T 2 \u2212 n2,1 H + n2,1 L \u2265 np2,T L \u2212 n2,1 H + n2,1 L .\nLemma 5.12 Algorithm 2 satisfies cycle monotonicity.\nProof.\nTaking p1 = pk and p2 = pk+1 in (7), we get that npk,T L \u2265 npk+1,T L \u2212nk+1,k H +nk+1,k L .\nSumming this over all k = 1, ... 
, K (where index K + 1 is taken to be 1) yields (6).\n5.3 Computation of prices Lemmas 5.7 and 5.12 show that our algorithm is a 2-approximation algorithm that satisfies cycle monotonicity.\nThus, by the discussion in Section 3, there exist prices that yield a truthful mechanism.\nTo obtain a polynomial-time mechanism, we also need to show how to compute these prices (or payments) in polynomial time.\nIt is not clear if the procedure outlined in Section 3, based on computing shortest paths in the allocation graph, yields a polynomial-time algorithm, since the allocation graph has an exponential number of nodes (one for each output assignment).\nInstead of analyzing the allocation graph, we will leverage our proof of cycle monotonicity, in particular inequality (7), and simply spell out the payments.\nRecall that the utility of a player is $u_i = P_i - l_i$, where $P_i$ is the payment made to player i. For convenience, we will first specify negative payments (i.e., the $P_i$s will actually be prices charged to the players) and then show that these can be modified so that players have non-negative utilities (if they act truthfully).\nLet $H^i(p)$ denote the number of jobs assigned to machine i in step 3 of the algorithm.\nBy Claim 5.6, we know that all these jobs are assigned to high machines (according to the declared processing times).\nLet $H^{-i}(p) = \sum_{i' \ne i} H^{i'}(p)$ and $n^{-i}_{p,T} = \sum_{i' \ne i} n^{i'}_{p,T}$.\nThe payment $P_i$ to player i is defined as:\n$P_i(p) = -L \cdot n^{-i}_{p,T^*(p)} - H \cdot H^{-i}(p) - (H - L)\big( n_{p,T^*(p)} - n_{p,T^L_i(p_{-i})} \big)$ (10)\nWe can interpret our payments as equating the player's cost to a careful modification of the total load (in the spirit of VCG prices).\nThe first and second terms in (10), when subtracted from i's load $l_i$, equate i's cost to the total load.\nThe term $n_{p,T^*(p)} - n_{p,T^L_i(p_{-i})}$ is in fact equal to $n^{-i}_{p,T^*(p)} - n^{-i}_{p,T^*(p)}|_{T^L_i(p_{-i})}$, since the low-load on i is at most $T^L_i(p_{-i})$ (by Corollary 5.9).\nThus the last term in equation (10) implies that we treat the low jobs that were assigned beyond the $T^L_i(p_{-i})$ threshold (to machines other than i) effectively as high jobs for the total utility calculation from i's point of view.\nIt is not clear how one could have conjured up these payments a priori in order to prove the truthfulness of our algorithm.\nHowever, by relying on cycle monotonicity, we were not only able to argue the existence of payments, but our proof also paved the way for actually inferring these payments.\nThe following lemma explicitly verifies that the payments defined above do indeed give a truthful mechanism.\nLemma 5.13 Fix a player i and the other players' declarations $p_{-i}$. 
Let i``s true type be p1 i .\nThen, under the payments defined in (10), i``s utility when she declares her true type p1 i is at least her utility when she declares any other type p2 i .\nProof.\nLet c1 i , c2 i denote i``s total cost, defined as the negative of her utility, when she declares p1 , and p2 , respectively (and the others declare p\u2212i).\nSince p\u2212i is fixed, we omit p\u2212i from the expressions below for notational clarity.\nThe true load of i when she declares her true type p1 i is L \u00b7 ni p1,T \u2217(p1) + H \u00b7 Hi (p1 ), and therefore c1 i = L \u00b7 np1,T \u2217(p1) + H \u00b7 (n \u2212 np1,T \u2217(p1)) + (H \u2212 L) ` np1,T \u2217(p1) \u2212 np1,T L i \u00b4 = n \u00b7 H \u2212 (H \u2212 L)np1,T L i (11) On the other hand, i``s true load when she declares p2 i is L \u00b7 (ni p2,T \u2217(p2) \u2212 n2,1 H + n2,1 L ) + H \u00b7 (Hi + n2,1 H \u2212 n2,1 L ) (since i``s true processing time vector is p1 i ), and thus c2 i = n \u00b7 H \u2212 (H \u2212 L)np2,T L i + (H \u2212 L)n2,1 H \u2212 (H \u2212 L)n2,1 L .\nThus, (7) implies that c1 i \u2264 c2 i .\nPrice specifications are commonly required to satisfy, in addition to truthfulness, individual rationality, i.e., a player``s utility should be non-negative if she reveals her true value.\nThe payments given by (10) are not individually rational as they actually charge a player a certain amount.\nHowever, it is well-known that this problem can be easily solved by adding a large-enough constant to the price definition.\nIn our case, for example, letting H denote the vector of all H``s, we can add the term n\u00b7H \u2212(H \u2212L)n(H,p\u2212i),T L i (p\u2212i) to (10).\nNote that this is a constant for player i. Thus, the new payments are Pi (p) = n \u00b7 H \u2212 L \u00b7 n\u2212i p,T \u2217(p) \u2212 H \u00b7 H\u2212i (p) \u2212 (H \u2212L) ` np,T \u2217(p) \u2212np,T L i (p\u2212i) +n(H,p\u2212i),T L i (p\u2212i) \u00b4 .\nAs shown by (11), this will indeed result in a non-negative utility for i (since n(H,p\u2212i),T L i (p\u2212i) \u2264 n(pi,p\u2212i),T L i (p\u2212i) for any type pi of player i).\nThis modification also ensures the additionally desired normalization property that if a player receives no jobs then she receives zero payment: if player i receives the empty set for some type pi then she will also receive the empty set for the type H (this is easy to verify for our specific algorithm), and for the type H, her utility equals zero; thus, by truthfulness this must also be the utility of every other declaration that results in i receiving the empty set.\nThis completes the proof of Theorem 5.3.\n260 5.4 Impossibility of exact implementation We now show that irrespective of computational considerations, there does not exist a cycle-monotone algorithm for the L-H case with an approximation ratio better than 1.14.\nLet H = \u03b1\u00b7L for some 2 < \u03b1 < 2.5 that we will choose later.\nThere are two machines I, II and seven jobs.\nConsider the following two scenarios: Scenario 1.\nEvery job has the same processing time on both machines: jobs 1-5, are L, and jobs 6, 7 are H. Any optimal schedule assigns jobs 1-5 to one machine and jobs 6, 7 to the other, and has makespan OPT1 = 5L.\nThe secondbest schedule has makespan at least Second1 = 2H + L. 
Scenario 2.\nIf the algorithm chooses an optimal schedule for scenario 1, assume without loss of generality that jobs 6, 7 are assigned to machine II.\nIn scenario 2, machine I has the same processing-time vector.\nMachine II lowers jobs 6, 7 to L and increases 1-5 to H.\nAn optimal schedule has makespan 2L + H, where machine II gets jobs 6, 7 and one of the jobs 1-5.\nThe second-best schedule for this scenario has makespan at least Second2 = 5L.\nTheorem 5.14 No deterministic truthful mechanism for the two-value scheduling problem can obtain an approximation ratio better than 1.14.\nProof.\nWe first argue that a cycle-monotone algorithm cannot choose the optimal schedule in both scenarios.\nThis follows because otherwise cycle monotonicity is violated for machine II.\nTaking p1 II , p2 II to be machine II``s processingtime vectors for scenarios 1, 2 respectively, we get P j(p1 II ,j \u2212 p2 II ,j)(x2 II ,j \u2212x1 II ,j) = (L\u2212H)(1\u22120) < 0.\nThus, any truthful mechanism must return a sub-optimal makespan in at least one scenario, and therefore its approximation ratio is at least min \u02d8Second1 OPT1 , Second2 OPT2 \u00af \u2265 1.14 for \u03b1 = 2.364.\nWe remark that for the {Lj, Hj}-case where there is a common ratio r = Hj Lj for all jobs (this generalizes the restricted-machines setting) one can obtain a fractional truthful mechanism (with efficiently computable prices) that returns a schedule of makespan at most OPT(p) for every p.\nOne can view each job j as consisting of Lj sub-jobs of size 1 on a machine i if pij = Lj, and size r if pij = Hj.\nFor this new instance \u02dcp, note that \u02dcpij \u2208 {1, r} for every i, j. Notice also that any assignment \u02dcx for the instance \u02dcp translates to a fractional assignment x for p, where pijxij =P j : sub-job of j \u02dcpij \u02dcxij .\nThus, if we use Algorithm 2 to obtain a schedule for the instance \u02dcp, equation (6) translates precisely to (3) for the assignment x; moreover, the prices for \u02dcp translate to prices for the instance p.\nThe number of sub-jobs assigned to low-machines in the flow-phase is simply the total work assigned to low-machines.\nThus, we can implement the above reduction by setting up a max-flow problem that seems to maximize the total work assigned to low machines.\nMoreover, since we have a fractional domain, we can use a more efficient greedy rule for packing the unassigned portions of jobs and argue that the fractional assignment has makespan at most OPT(p).\nThe assignment x need not however satisfy the condition that xij > 0 implies pij \u2264 OPT(p) for arbitrary r, therefore, the rounding procedure of Lemma 4.2 does not yield a 2-approximation truthful-in-expectation mechanism.\nBut if r > OPT(p) (as in the restricted-machines setting), this condition does hold, so we get a 2-approximation truthful mechanism.\nAcknowledgments We thank Elias Koutsoupias for his help in refining the analysis of the lower bound in Section 5.4, and the reviewers for their helpful comments.\n6.\nREFERENCES [1] N. Andelman, Y. Azar, and M. Sorani.\nTruthful approximation mechanisms for scheduling selfish related machines.\nIn Proc.\n22nd STACS, 69-82, 2005.\n[2] A. Archer.\nMechanisms for discrete optimization with rational agents.\nPhD thesis, Cornell University, 2004.\n[3] A. Archer and \u00b4E. Tardos.\nTruthful mechanisms for one-parameter agents.\nIn Proc.\n42nd FOCS, pages 482-491, 2001.\n[4] V. Auletta, R. De-Prisco, P. Penna, and G. 
Persiano.\nDeterministic truthful approximation mechanisms for scheduling related machines.\nIn Proc.\n21st STACS, pages 608-619, 2004.\n[5] I. Bez\u00b4akov\u00b4a and V. Dani.\nAllocating indivisible goods.\nIn ACM SIGecom Exchanges, 2005.\n[6] S. Bikhchandani, S. Chatterjee, R. Lavi, A. Mu``alem, N. Nisan, and A. Sen. Weak monotonicity characterizes deterministic dominant-strategy implementation.\nEconometrica, 74:1109-1132, 2006.\n[7] P. Briest, P. Krysta, and B. Vocking.\nApproximation techniques for utilitarian mechanism design.\nIn Proc.\n37th STOC, pages 39-48, 2005.\n[8] G. Christodoulou, E. Koutsoupias, and A. Vidali.\nA lower bound for scheduling mechanisms.\nIn Proc.\n18th SODA, pages 1163-1170, 2007.\n[9] E. Clarke.\nMultipart pricing of public goods.\nPublic Choice, 8:17-33, 1971.\n[10] T. Groves.\nIncentives in teams.\nEconometrica, 41:617-631, 1973.\n[11] H. Gui, R. Muller, and R. V. Vohra.\nCharacterizing dominant strategy mechanisms with multi-dimensional types, 2004.\nWorking paper.\n[12] L. A. Hall.\nApproximation algorithms for scheduling.\nIn D. Hochbaum, editor, Approximation Algorithms for NP-Hard Problems.\nPWS Publishing, MA, 1996.\n[13] A. Kov\u00b4acs.\nFast monotone 3-approximation algorithm for scheduling related machines.\nIn Proc.\n13th ESA, pages 616-627, 2005.\n[14] V. S. A. Kumar, M. V. Marathe, S. Parthasarathy, and A. Srinivasan.\nApproximation algorithms for scheduling on multiple machines.\nIn Proc.\n46th FOCS, pages 254-263, 2005.\n[15] R. Lavi, A. Mu``alem, and N. Nisan.\nTowards a characterization of truthful combinatorial auctions.\nIn Proc.\n44th FOCS, pages 574-583, 2003.\n[16] R. Lavi and C. Swamy.\nTruthful and near-optimal mechanism design via linear programming.\nIn Proc.\n46th FOCS, pages 595-604, 2005.\n[17] D. Lehmann, L. O``Callaghan, and Y. Shoham.\nTruth revelation in approximately efficient combinatorial auctions.\nJournal of the ACM, 49:577-602, 2002.\n[18] J. K. Lenstra, D. B. Shmoys, and \u00b4E. Tardos.\nApproximation algorithms for scheduling unrelated parallel machines.\nMath.\nProg., 46:259-271, 1990.\n[19] R. J. Lipton, E. Markakis, E. Mossel, and A. Saberi.\nOn approximately fair allocations of indivisible goods.\nIn Proc.\n5th EC, pages 125-131, 2004.\n[20] A. Mu``alem and M. Schapira.\nSetting lower bounds on truthfulness.\nIn Proc.\n18th SODA, 1143-1152, 2007.\n[21] R. Myerson.\nOptimal auction design.\nMathematics of Operations Research, 6:58-73, 1981.\n[22] N. Nisan and A. Ronen.\nAlgorithmic mechanism design.\nGames and Econ.\nBehavior, 35:166-196, 2001.\n[23] J. C. Rochet.\nA necessary and sufficient condition for rationalizability in a quasilinear context.\nJournal of Mathematical Economics, 16:191-200, 1987.\n[24] M. Saks and L. Yu.\nWeak monotonicity suffices for truthfulness on convex domains.\nIn Proc.\n6th EC, pages 286-293, 2005.\n[25] D. B. Shmoys and \u00b4E. Tardos.\nAn approximation algorithm for the generalized assignment problem.\nMathematical Programming, 62:461-474, 1993.\n[26] W. Vickrey.\nCounterspeculations, auctions, and competitive sealed tenders.\nJ. 
Finance, 16:8-37, 1961.\n261", "lvl-3": "Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity\nABSTRACT\nWe consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design , where the machines are the strategic players .\nThis is a multidimensional scheduling domain , and the only known positive results for makespan minimization in such a domain are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe study a well-motivated special case of this problem , where the processing time of a job on each machine may either be `` low '' or `` high '' , and the low and high values are public and job-dependent .\nThis preserves the multidimensionality of the domain , and generalizes the restricted-machines ( i.e. , { pj , \u221e } ) setting in scheduling .\nWe give a general technique to convert any c-approximation algorithm to a 3capproximation truthful-in-expectation mechanism .\nThis is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion .\nWhen the low and high values are the same for all jobs , we devise a deterministic 2-approximation truthful mechanism .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nOur constructions are novel in two respects .\nFirst , we do not utilize or rely on explicit price definitions to prove truthfulness ; instead we design algorithms that satisfy cycle monotonicity .\nCycle monotonicity [ 23 ] is a necessary and sufficient condition for truthfulness , is a generalization of value monotonicity for multidimensional domains .\nHowever , whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in singledimensional domains , ours is the first work that leverages cycle monotonicity in the multidimensional setting .\nSecond , our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem , and then converting it into a truthfulin-expectation mechanism .\nThis builds upon a technique of [ 16 ] , and shows the usefulness of fractional mechanisms in truthful mechanism design .\n1 .\nINTRODUCTION\nMechanism design studies algorithmic constructions under the presence of strategic players who hold the inputs to the algorithm .\nAlgorithmic mechanism design has focused mainly on settings were the social planner or designer wishes to maximize the social welfare ( or equivalently , minimize social cost ) , or on auction settings where revenuemaximization is the main goal .\nAlternative optimization goals , such as those that incorporate fairness criteria ( which have been investigated algorithmically and in social choice theory ) , have received very little or no attention .\nIn this paper , we consider such an alternative goal in the context of machine scheduling , namely , makespan minimization .\nThere are n jobs or tasks that need to be assigned to m machines , where each job has to be assigned to exactly one machine .\nAssigning a job j to a machine i incurs a load ( cost ) of pij \u2265 0 on machine i , and the load of a machine is the sum of the loads incurred due to the jobs assigned to it ; the goal is to schedule the jobs so as to minimize the maximum load of a machine , which is termed the makespan of the schedule .\nMakespan minimization is a common objective in scheduling environments 
, and has been well studied algorithmically in both the Computer Science and Operations Research communities ( see , e.g. , the survey [ 12 ] ) .\nFollowing the work of Nisan and Ronen [ 22 ] , we consider each machine to be a strategic player or agent who privately knows its own processing time for each job , and may misrepresent these values in order to decrease its load ( which is its incurred cost ) .\nHence , we approach the problem via mechanism design : the social designer , who holds the set of jobs to be assigned , needs to specify , in addition to a schedule , suitable payments to the players in order to incentivize them to reveal their true processing times .\nSuch a mechanism is called a truthful mechanism .\nThe makespan-minimization objective is quite different from the classic goal of social-welfare maximization , where one wants to maximize the total welfare ( or minimize the total cost ) of all players .\nInstead , it\ncorresponds to maximizing the minimum welfare and the notion of max-min fairness , and appears to be a much harder problem from the viewpoint of mechanism design .\nIn particular , the celebrated VCG [ 26 , 9 , 10 ] family of mechanisms does not apply here , and we need to devise new techniques .\nThe possibility of constructing a truthful mechanism for makespan minimization is strongly related to assumptions on the players ' processing times , in particular , the `` dimensionality '' of the domain .\nNisan and Ronen considered the setting of unrelated machines where the pij values may be arbitrary .\nThis is a multidimensional domain , since a player 's private value is its entire vector of processing times ( pij ) j. Very few positive results are known for multidimensional domains in general , and the only positive results known for multidimensional scheduling are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe emphasize that regardless of computational considerations , even the existence of a truthful mechanism with a significantly better ( than m ) approximation ratio is not known for any such scheduling domain .\nOn the negative side , [ 22 ] showed that no truthful deterministic mechanism can achieve approximation ratio better than 2 , and strengthened this lower bound to m for two specific classes of deterministic mechanisms .\nRecently , [ 20 ] extended this lower bound to randomized mechanisms , and [ 8 ] improved the deterministic lower bound .\nIn stark contrast with the above state of affairs , much stronger ( and many more ) positive results are known for a special case of the unrelated machines problem , namely , the setting of related machines .\nHere , we have pij = pj/si for every i , j , where pj is public knowledge , and the speed si is the only private parameter of machine i .\nThis assumption makes the domain of players ' types single-dimensional .\nTruthfulness in such domains is equivalent to a convenient value-monotonicity condition [ 21 , 3 ] , which appears to make it significantly easier to design truthful mechanisms in such domains .\nArcher and Tardos [ 3 ] first considered the related machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism .\nThe gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [ 3 ] showed that there exists a truthful mechanism that always outputs an optimal schedule .\n( Recall that in the multidimensional unrelated machines setting , it is impossible to obtain a truthful mechanism with approximation 
ratio better than 2 . )\nVarious follow-up results [ 2 , 4 , 1 , 13 ] have strengthened the notion of truthfulness and/or improved the approximation ratio .\nSuch difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism design settings ( e.g. , combinatorial auctions ) .\nThus , in addition to the specific importance of scheduling in strategic environments , ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains .\nIn this paper , we consider the makespan-minimization problem for a special case of unrelated machines , where the processing time of a job is either `` low '' or `` high '' on each machine .\nMore precisely , in our setting , pij \u2208 { Lj , Hj } for every i , j , where the Lj , Hj values are publicly known ( Lj - `` low '' , Hj - `` high '' ) .\nWe call this model the `` jobdependent two-values '' case .\nThis model generalizes the classic `` restricted machines '' setting , where pij \u2208 { Lj , \u221e } which has been well-studied algorithmically .\nA special case of our model is when Lj = L and Hj = H for all jobs j , which we denote simply as the `` two-values '' scheduling model .\nBoth of our domains are multidimensional , since the machines are unrelated : one job may be low on one machine and high on the other , while another job may follow the opposite pattern .\nThus , the private information of each machine is a vector specifying which jobs are low and high on it .\nThus , they retain the core property underlying the hardness of truthful mechanism design for unrelated machines , and by studying these special settings we hope to gain some insights that will be useful for tackling the general problem .\nOur Results and Techniques We present various positive results for our multidimensional scheduling domains .\nOur first result is a general method to convert any capproximation algorithm for the job-dependent two values setting into a 3c-approximation truthful-in-expectation mechanism .\nThis is one of the very few known results that use an approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem .\nOur result implies that there exists a 3-approximation truthfulin-expectation mechanism for the Lj-Hj setting .\nInterestingly , the proof of truthfulness is not based on supplying explicit prices , and our construction does not necessarily yield efficiently-computable prices ( but the allocation rule is efficiently computable ) .\nOur second result applies to the twovalues setting ( Lj = L , Hj = H ) , for which we improve both the approximation ratio and strengthen the notion of truthfulness .\nWe obtain a deterministic 2-approximation truthful mechanism ( along with prices ) for this problem .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nComplementing this , we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule ( unlike in the case of related machines ) .\nBy exploiting the multidimensionality of the domain , we prove that no truthful deterministic mechanism can obtain an approximation ratio better than 1.14 to the makespan ( irrespective of computational considerations ) .\nThe main technique , and one of the novelties , underlying our constructions and proofs , is that we do not rely on explicit price specifications in order 
to prove the truthfulness of our mechanisms .\nInstead we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm , i.e. , an algorithm for which prices ensuring truthfulness exist , and then find these prices ( by further delving into the proof of implementability ) .\nThis kind of analysis has been the method of choice in the design of truthful mechanisms for singledimensional domains , where value-monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem ( see , e.g. , [ 3 , 7 , 4 , 1 , 13 ] ) .\nBut for multidimensional domains , almost all positive results have relied on explicit price specifications in order to prove truthfulness ( an exception is the work on unknown single-minded players in combinatorial auctions [ 17 , 7 ] ) , a fact that yet again shows the gap in our understanding of multidimensional vs. single-dimensional domains .\nOur work is the first to leverage monotonicity conditions for truthful mechanism design in arbitrary domains .\nThe monotonicity condition we use , which is sometimes called cycle monotonicity , was first proposed by Rochet [ 23 ] ( see also [ 11 ] ) .\nIt is a generalization of value-monotonicity and completely characterizes truthfulness in every domain .\nOur methods and analyses demonstrate the potential benefits\nof this characterization , and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains .\nConsider , for example , our first result showing that any c-approximation algorithm can be `` exported '' to a 3c-approximation truthful-in-expectation mechanism .\nAt the level of generality of an arbitrary approximation algorithm , it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism .\nBut , cycle monotonicity does allow us to prove such a statement .\nIn fact , some such condition based only on the underlying algorithm ( and not on the prices ) seems necessary to prove such a general statement .\nThe method for converting approximation algorithms into truthful mechanisms involves another novel idea .\nOur randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule .\nMoving to a fractional domain allows us to `` plug-in '' truthfulness into the approximation algorithm in a rather simple fashion , while losing a factor of 2 in the approximation ratio .\nWe then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment .\nFor this , we use a recent rounding procedure of Kumar et al. 
[ 14 ] that is tailored for unrelated-machine scheduling .\nThis preserves truthfulness , but we lose another additive factor equal to the approximation ratio .\nOur construction uses and extends some observations of Lavi and Swamy [ 16 ] , and further demonstrates the benefits of fractional mechanisms in truthful mechanism design .\nRelated Work Nisan and Ronen [ 22 ] first considered the makespan-minimization problem for unrelated machines .\nThey gave an m-approximation positive result and proved various lower bounds .\nRecently , Mu'alem and Schapira [ 20 ] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms , and Christodoulou , Koutsoupias , and Vidali [ 8 ] proved a ( 1 + \\ / 2 ) - lower bound for deterministic truthful mechanisms.Archer and Tardos [ 3 ] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism .\nThis been improved in [ 2 , 4 , 1 , 13 ] to : a 2-approximation randomized mechanism [ 2 ] ; an FPTAS for any fixed number of machines given by Andelman , Azar and Sorani [ 1 ] , and a 3-approximation deterministic mechanism by Kov \u00b4 acs [ 13 ] .\nThe algorithmic problem ( i.e. , without requiring truthfulness ) of makespan-minimization on unrelated machines is well understood and various 2-approximation algorithms are known .\nLenstra , Shmoys and Tardos [ 18 ] gave the first such algorithm .\nShmoys and Tardos [ 25 ] later gave a 2approximation algorithm for the generalized assignment problem , a generalization where there is a cost cij for assigning a job j to a machine i , and the goal is to minimize the cost subject to a bound on the makespan .\nRecently , Kumar , Marathe , Parthasarathy , and Srinivasan [ 14 ] gave a randomized rounding algorithm that yields the same bounds .\nWe use their procedure in our randomized mechanism .\nThe characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [ 23 ] ( see also Gui et al. [ 11 ] ) .\nThis generalizes the value-monotonicity condition for single-dimensional domains which was given by Myerson [ 21 ] and rediscovered by [ 3 ] .\nAs mentioned earlier , this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [ 3 , 7 , 4 , 1 , 13 ] .\nFor convex domains ( i.e. , each players ' set of private values is convex ) , it is known that cycle monotonicity is implied by a simpler condition , called weak monotonicity [ 15 , 6 , 24 ] .\nBut even this simpler condition has not found much application in truthful mechanism design for multidimensional problems .\nObjectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design .\nIn the context of combinatorial auctions , the problems of maximizing the minimum value received by a player , and computing an envy-minimizing allocation have been studied briefly .\nLavi , Mu'alem , and Nisan [ 15 ] showed that the former objective can not be implemented truthfully ; Bezakova and Dani [ 5 ] gave a 0.5-approximation mechanism for two players with additive valuations .\nLipton et al. 
[ 19 ] showed that the latter objective can not be implemented truthfully .\nThese lower bounds were strengthened in [ 20 ] .\n2 .\nPRELIMINARIES\n2.1 The scheduling domain\nIn our scheduling problem , we are given n jobs and m machines , and each job must be assigned to exactly one machine .\nIn the unrelated-machines setting , each machine i is characterized by a vector of processing times ( pij ) j , where pij E R \u2265 0 U { oo } denotes i 's processing time for job j with the value oo specifying that i can not process j .\nWe consider two special cases of this problem : 1 .\nThe job-dependent two-values case , where pij E { Lj , Hj } for every i , j , with Lj < Hj , and the values Lj , Hj are known .\nThis generalizes the classic scheduling model of restricted machines , where Hj = oo .\n2 .\nThe two-values case , which is a special case of above where Lj = L and Hj = H for all jobs j , i.e. , pij E { L , H } for every i , j .\nWe say that a job j is low on machine i if pij = Lj , and high if pij = Hj .\nWe will use the terms schedule and assignment interchangeably .\nWe represent a deterministic schedule by a vector x = ( xij ) i , j , where xij is 1 if job j is assigned to machine i , thus we have xij E { 0 , 1 } for every i , j , Pi xij = 1 for every job j .\nWe will also consider randomized algorithms and algorithms that return a fractional assignment .\nIn both these settings , we will again specify an assignment by a vector x = ( xij ) i , j with Pj xij = 1 , but now xij E [ 0 , 1 ] for every i , j. For a randomized algorithm , xij is simply the probability that j is assigned to i ( thus , x is a convex combination of integer assignments ) .\nWe denote the load of machine i ( under a given assignj xijpij , and the makespan of a schedule is defined as the maximum load on any machine , i.e. , maxi li .\nThe goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule .\n2.2 Mechanism design\nWe consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design .\nMechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure .\nFollowing the work of Nisan and Ronen [ 22 ] , we consider the machines to be the strategic players or agents .\nThe social designer holds the set of jobs that need to be assigned , but does\nnot know the ( true ) processing times of these jobs on the different machines .\nEach machine is a selfish entity , that privately knows its own processing time for each job .\non a machine incurs a cost to the machine equal to the true processing time of the job on the machine , and a machine may choose to misrepresent its vector of processing times , which are private , in order to decrease its cost .\nWe consider direct-revelation mechanisms : each machine reports its ( possibly false ) vector of processing times , the mechanism then computes a schedule and hands out payments to the players ( i.e. 
, machines ) to compensate them for the cost they incur in processing their assigned jobs .\nA ( direct-revelation ) mechanism thus consists of a tuple ( x , P ) : x specifies the schedule , and P = { Pi } specifies the payments handed out to the machines , where both x and the Pis are functions of the reported processing times p = ( pij ) i , j .\nThe mechanism 's goal is to compute a schedule that has near-optimal makespan with respect to the true processing times ; a machine i is however only interested in maximizing its own utility , Pi \u2212 li , where li is its load under the output assignment , and may declare false processing times if this could increase its utility .\nThe mechanism must therefore incentivize the machines/players to truthfully reveal their processing times via the payments .\nThis is made precise using the notion of dominant-strategy truthfulness .\nwhere ( x1 , P1 ) and ( x2 , P2 ) are respectively the schedule and payments when the other machines declare p \u2212 i and machine i declares p1i and p2i , i.e. , x1 = x ( p1i , p \u2212 i ) , Pi1 = Pi ( p1i , p \u2212 i ) and x2 = x ( p2 i , p \u2212 i ) , Pi 2 = Pi ( p2 i , p \u2212 i ) .\nTo put it in words , in a truthful mechanism , no machine can improve its utility by declaring a false processing time , no matter what the other machines declare .\nWe will also consider fractional mechanisms that return a fractional assignment , and randomized mechanisms that are allowed to toss coins and where the assignment and the payments may be random variables .\nThe notion of truthfulness for a fractional mechanism is the same as in Definition 2.1 , where x1 , x2 are now fractional assignments .\nFor a randomized mechanism , we will consider the notion of truthfulness in expectation [ 3 ] , which means that a machine ( player ) maximizes her expected utility by declaring her true processing-time vector .\nInequality ( 1 ) also defines truthfulness-in-expectation for a randomized mechanism , where Pi1 , Pi2 now denote the expected payments made to player i , x1 , x2 are the fractional assignments denoting the randomized algorithm 's schedule ( i.e. 
, xkij is the probability that j is assigned to i in the schedule output for ( pki , p \u2212 i ) ) .\nFor our two scheduling domains , the informational assumption is that the values Lj , Hj are publicly known .\nThe private information of a machine is which jobs have value Lj ( or L ) and which ones have value Hj ( or H ) on it .\nWe emphasize that both of our domains are multidimensional , since each machine i needs to specify a vector saying which jobs are low and high on it .\n3 .\nCYCLE MONOTONICITY\n4 .\nA GENERAL TECHNIQUE TO OBTAIN RANDOMIZED MECHANISMS\n5 .\nA DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE\n5.1 A cycle-monotone approximation algorithm\n5.2 Analysis\n5.2.1 Proof of approximation Claim 5.4 If OPT ( p ) < H , the makespan is at most OPT ( p ) .\n5.2.2 Proof of cycle monotonicity\n5.3 Computation of prices\n5.4 Impossibility of exact implementation", "lvl-4": "Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity\nABSTRACT\nWe consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design , where the machines are the strategic players .\nThis is a multidimensional scheduling domain , and the only known positive results for makespan minimization in such a domain are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe study a well-motivated special case of this problem , where the processing time of a job on each machine may either be `` low '' or `` high '' , and the low and high values are public and job-dependent .\nThis preserves the multidimensionality of the domain , and generalizes the restricted-machines ( i.e. , { pj , \u221e } ) setting in scheduling .\nWe give a general technique to convert any c-approximation algorithm to a 3capproximation truthful-in-expectation mechanism .\nThis is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion .\nWhen the low and high values are the same for all jobs , we devise a deterministic 2-approximation truthful mechanism .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nOur constructions are novel in two respects .\nFirst , we do not utilize or rely on explicit price definitions to prove truthfulness ; instead we design algorithms that satisfy cycle monotonicity .\nCycle monotonicity [ 23 ] is a necessary and sufficient condition for truthfulness , is a generalization of value monotonicity for multidimensional domains .\nHowever , whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in singledimensional domains , ours is the first work that leverages cycle monotonicity in the multidimensional setting .\nSecond , our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem , and then converting it into a truthfulin-expectation mechanism .\nThis builds upon a technique of [ 16 ] , and shows the usefulness of fractional mechanisms in truthful mechanism design .\n1 .\nINTRODUCTION\nMechanism design studies algorithmic constructions under the presence of strategic players who hold the inputs to the algorithm .\nAlgorithmic mechanism design has focused mainly on settings were the social planner or designer wishes to maximize the social welfare ( or equivalently , minimize social cost ) , or on auction settings where revenuemaximization is 
the main goal .\nIn this paper , we consider such an alternative goal in the context of machine scheduling , namely , makespan minimization .\nThere are n jobs or tasks that need to be assigned to m machines , where each job has to be assigned to exactly one machine .\nHence , we approach the problem via mechanism design : the social designer , who holds the set of jobs to be assigned , needs to specify , in addition to a schedule , suitable payments to the players in order to incentivize them to reveal their true processing times .\nSuch a mechanism is called a truthful mechanism .\nInstead , it\ncorresponds to maximizing the minimum welfare and the notion of max-min fairness , and appears to be a much harder problem from the viewpoint of mechanism design .\nIn particular , the celebrated VCG [ 26 , 9 , 10 ] family of mechanisms does not apply here , and we need to devise new techniques .\nThe possibility of constructing a truthful mechanism for makespan minimization is strongly related to assumptions on the players ' processing times , in particular , the `` dimensionality '' of the domain .\nNisan and Ronen considered the setting of unrelated machines where the pij values may be arbitrary .\nThis is a multidimensional domain , since a player 's private value is its entire vector of processing times ( pij ) j. Very few positive results are known for multidimensional domains in general , and the only positive results known for multidimensional scheduling are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe emphasize that regardless of computational considerations , even the existence of a truthful mechanism with a significantly better ( than m ) approximation ratio is not known for any such scheduling domain .\nOn the negative side , [ 22 ] showed that no truthful deterministic mechanism can achieve approximation ratio better than 2 , and strengthened this lower bound to m for two specific classes of deterministic mechanisms .\nRecently , [ 20 ] extended this lower bound to randomized mechanisms , and [ 8 ] improved the deterministic lower bound .\nIn stark contrast with the above state of affairs , much stronger ( and many more ) positive results are known for a special case of the unrelated machines problem , namely , the setting of related machines .\nHere , we have pij = pj/si for every i , j , where pj is public knowledge , and the speed si is the only private parameter of machine i .\nThis assumption makes the domain of players ' types single-dimensional .\nTruthfulness in such domains is equivalent to a convenient value-monotonicity condition [ 21 , 3 ] , which appears to make it significantly easier to design truthful mechanisms in such domains .\nArcher and Tardos [ 3 ] first considered the related machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism .\nThe gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [ 3 ] showed that there exists a truthful mechanism that always outputs an optimal schedule .\n( Recall that in the multidimensional unrelated machines setting , it is impossible to obtain a truthful mechanism with approximation ratio better than 2 . )\nVarious follow-up results [ 2 , 4 , 1 , 13 ] have strengthened the notion of truthfulness and/or improved the approximation ratio .\nSuch difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism design settings ( e.g. 
, combinatorial auctions ) .\nThus , in addition to the specific importance of scheduling in strategic environments , ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains .\nIn this paper , we consider the makespan-minimization problem for a special case of unrelated machines , where the processing time of a job is either `` low '' or `` high '' on each machine .\nWe call this model the `` jobdependent two-values '' case .\nThis model generalizes the classic `` restricted machines '' setting , where pij \u2208 { Lj , \u221e } which has been well-studied algorithmically .\nA special case of our model is when Lj = L and Hj = H for all jobs j , which we denote simply as the `` two-values '' scheduling model .\nBoth of our domains are multidimensional , since the machines are unrelated : one job may be low on one machine and high on the other , while another job may follow the opposite pattern .\nThus , the private information of each machine is a vector specifying which jobs are low and high on it .\nThus , they retain the core property underlying the hardness of truthful mechanism design for unrelated machines , and by studying these special settings we hope to gain some insights that will be useful for tackling the general problem .\nOur Results and Techniques We present various positive results for our multidimensional scheduling domains .\nOur first result is a general method to convert any capproximation algorithm for the job-dependent two values setting into a 3c-approximation truthful-in-expectation mechanism .\nThis is one of the very few known results that use an approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem .\nOur result implies that there exists a 3-approximation truthfulin-expectation mechanism for the Lj-Hj setting .\nOur second result applies to the twovalues setting ( Lj = L , Hj = H ) , for which we improve both the approximation ratio and strengthen the notion of truthfulness .\nWe obtain a deterministic 2-approximation truthful mechanism ( along with prices ) for this problem .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nComplementing this , we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule ( unlike in the case of related machines ) .\nBy exploiting the multidimensionality of the domain , we prove that no truthful deterministic mechanism can obtain an approximation ratio better than 1.14 to the makespan ( irrespective of computational considerations ) .\nThe main technique , and one of the novelties , underlying our constructions and proofs , is that we do not rely on explicit price specifications in order to prove the truthfulness of our mechanisms .\nInstead we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm , i.e. , an algorithm for which prices ensuring truthfulness exist , and then find these prices ( by further delving into the proof of implementability ) .\nThis kind of analysis has been the method of choice in the design of truthful mechanisms for singledimensional domains , where value-monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem ( see , e.g. 
, [ 3 , 7 , 4 , 1 , 13 ] ) .\nOur work is the first to leverage monotonicity conditions for truthful mechanism design in arbitrary domains .\nThe monotonicity condition we use , which is sometimes called cycle monotonicity , was first proposed by Rochet [ 23 ] ( see also [ 11 ] ) .\nIt is a generalization of value-monotonicity and completely characterizes truthfulness in every domain .\nOur methods and analyses demonstrate the potential benefits\nof this characterization , and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains .\nConsider , for example , our first result showing that any c-approximation algorithm can be `` exported '' to a 3c-approximation truthful-in-expectation mechanism .\nAt the level of generality of an arbitrary approximation algorithm , it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism .\nBut , cycle monotonicity does allow us to prove such a statement .\nIn fact , some such condition based only on the underlying algorithm ( and not on the prices ) seems necessary to prove such a general statement .\nThe method for converting approximation algorithms into truthful mechanisms involves another novel idea .\nOur randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule .\nMoving to a fractional domain allows us to `` plug-in '' truthfulness into the approximation algorithm in a rather simple fashion , while losing a factor of 2 in the approximation ratio .\nWe then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment .\nThis preserves truthfulness , but we lose another additive factor equal to the approximation ratio .\nOur construction uses and extends some observations of Lavi and Swamy [ 16 ] , and further demonstrates the benefits of fractional mechanisms in truthful mechanism design .\nRelated Work Nisan and Ronen [ 22 ] first considered the makespan-minimization problem for unrelated machines .\nThey gave an m-approximation positive result and proved various lower bounds .\nThis been improved in [ 2 , 4 , 1 , 13 ] to : a 2-approximation randomized mechanism [ 2 ] ; an FPTAS for any fixed number of machines given by Andelman , Azar and Sorani [ 1 ] , and a 3-approximation deterministic mechanism by Kov \u00b4 acs [ 13 ] .\nThe algorithmic problem ( i.e. , without requiring truthfulness ) of makespan-minimization on unrelated machines is well understood and various 2-approximation algorithms are known .\nLenstra , Shmoys and Tardos [ 18 ] gave the first such algorithm .\nShmoys and Tardos [ 25 ] later gave a 2approximation algorithm for the generalized assignment problem , a generalization where there is a cost cij for assigning a job j to a machine i , and the goal is to minimize the cost subject to a bound on the makespan .\nRecently , Kumar , Marathe , Parthasarathy , and Srinivasan [ 14 ] gave a randomized rounding algorithm that yields the same bounds .\nWe use their procedure in our randomized mechanism .\nThe characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [ 23 ] ( see also Gui et al. 
[ 11 ] ) .\nThis generalizes the value-monotonicity condition for single-dimensional domains which was given by Myerson [ 21 ] and rediscovered by [ 3 ] .\nAs mentioned earlier , this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [ 3 , 7 , 4 , 1 , 13 ] .\nFor convex domains ( i.e. , each players ' set of private values is convex ) , it is known that cycle monotonicity is implied by a simpler condition , called weak monotonicity [ 15 , 6 , 24 ] .\nBut even this simpler condition has not found much application in truthful mechanism design for multidimensional problems .\nObjectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design .\nIn the context of combinatorial auctions , the problems of maximizing the minimum value received by a player , and computing an envy-minimizing allocation have been studied briefly .\nLavi , Mu'alem , and Nisan [ 15 ] showed that the former objective can not be implemented truthfully ; Bezakova and Dani [ 5 ] gave a 0.5-approximation mechanism for two players with additive valuations .\nThese lower bounds were strengthened in [ 20 ] .\n2 .\nPRELIMINARIES\n2.1 The scheduling domain\nIn our scheduling problem , we are given n jobs and m machines , and each job must be assigned to exactly one machine .\nIn the unrelated-machines setting , each machine i is characterized by a vector of processing times ( pij ) j , where pij E R \u2265 0 U { oo } denotes i 's processing time for job j with the value oo specifying that i can not process j .\nWe consider two special cases of this problem : 1 .\nThe job-dependent two-values case , where pij E { Lj , Hj } for every i , j , with Lj < Hj , and the values Lj , Hj are known .\nThis generalizes the classic scheduling model of restricted machines , where Hj = oo .\n2 .\nWe say that a job j is low on machine i if pij = Lj , and high if pij = Hj .\nWe will use the terms schedule and assignment interchangeably .\nWe will also consider randomized algorithms and algorithms that return a fractional assignment .\nWe denote the load of machine i ( under a given assignj xijpij , and the makespan of a schedule is defined as the maximum load on any machine , i.e. , maxi li .\nThe goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule .\n2.2 Mechanism design\nWe consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design .\nMechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure .\nFollowing the work of Nisan and Ronen [ 22 ] , we consider the machines to be the strategic players or agents .\nThe social designer holds the set of jobs that need to be assigned , but does\nnot know the ( true ) processing times of these jobs on the different machines .\nEach machine is a selfish entity , that privately knows its own processing time for each job .\nWe consider direct-revelation mechanisms : each machine reports its ( possibly false ) vector of processing times , the mechanism then computes a schedule and hands out payments to the players ( i.e. 
, machines ) to compensate them for the cost they incur in processing their assigned jobs .\nA ( direct-revelation ) mechanism thus consists of a tuple ( x , P ) : x specifies the schedule , and P = { Pi } specifies the payments handed out to the machines , where both x and the Pis are functions of the reported processing times p = ( pij ) i , j .\nThe mechanism must therefore incentivize the machines/players to truthfully reveal their processing times via the payments .\nThis is made precise using the notion of dominant-strategy truthfulness .\nTo put it in words , in a truthful mechanism , no machine can improve its utility by declaring a false processing time , no matter what the other machines declare .\nWe will also consider fractional mechanisms that return a fractional assignment , and randomized mechanisms that are allowed to toss coins and where the assignment and the payments may be random variables .\nThe notion of truthfulness for a fractional mechanism is the same as in Definition 2.1 , where x1 , x2 are now fractional assignments .\nFor a randomized mechanism , we will consider the notion of truthfulness in expectation [ 3 ] , which means that a machine ( player ) maximizes her expected utility by declaring her true processing-time vector .\nFor our two scheduling domains , the informational assumption is that the values Lj , Hj are publicly known .\nThe private information of a machine is which jobs have value Lj ( or L ) and which ones have value Hj ( or H ) on it .\nWe emphasize that both of our domains are multidimensional , since each machine i needs to specify a vector saying which jobs are low and high on it .", "lvl-2": "Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity\nABSTRACT\nWe consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design , where the machines are the strategic players .\nThis is a multidimensional scheduling domain , and the only known positive results for makespan minimization in such a domain are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe study a well-motivated special case of this problem , where the processing time of a job on each machine may either be `` low '' or `` high '' , and the low and high values are public and job-dependent .\nThis preserves the multidimensionality of the domain , and generalizes the restricted-machines ( i.e. 
, { pj , \u221e } ) setting in scheduling .\nWe give a general technique to convert any c-approximation algorithm to a 3capproximation truthful-in-expectation mechanism .\nThis is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion .\nWhen the low and high values are the same for all jobs , we devise a deterministic 2-approximation truthful mechanism .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nOur constructions are novel in two respects .\nFirst , we do not utilize or rely on explicit price definitions to prove truthfulness ; instead we design algorithms that satisfy cycle monotonicity .\nCycle monotonicity [ 23 ] is a necessary and sufficient condition for truthfulness , is a generalization of value monotonicity for multidimensional domains .\nHowever , whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in singledimensional domains , ours is the first work that leverages cycle monotonicity in the multidimensional setting .\nSecond , our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem , and then converting it into a truthfulin-expectation mechanism .\nThis builds upon a technique of [ 16 ] , and shows the usefulness of fractional mechanisms in truthful mechanism design .\n1 .\nINTRODUCTION\nMechanism design studies algorithmic constructions under the presence of strategic players who hold the inputs to the algorithm .\nAlgorithmic mechanism design has focused mainly on settings were the social planner or designer wishes to maximize the social welfare ( or equivalently , minimize social cost ) , or on auction settings where revenuemaximization is the main goal .\nAlternative optimization goals , such as those that incorporate fairness criteria ( which have been investigated algorithmically and in social choice theory ) , have received very little or no attention .\nIn this paper , we consider such an alternative goal in the context of machine scheduling , namely , makespan minimization .\nThere are n jobs or tasks that need to be assigned to m machines , where each job has to be assigned to exactly one machine .\nAssigning a job j to a machine i incurs a load ( cost ) of pij \u2265 0 on machine i , and the load of a machine is the sum of the loads incurred due to the jobs assigned to it ; the goal is to schedule the jobs so as to minimize the maximum load of a machine , which is termed the makespan of the schedule .\nMakespan minimization is a common objective in scheduling environments , and has been well studied algorithmically in both the Computer Science and Operations Research communities ( see , e.g. 
, the survey [ 12 ] ) .\nFollowing the work of Nisan and Ronen [ 22 ] , we consider each machine to be a strategic player or agent who privately knows its own processing time for each job , and may misrepresent these values in order to decrease its load ( which is its incurred cost ) .\nHence , we approach the problem via mechanism design : the social designer , who holds the set of jobs to be assigned , needs to specify , in addition to a schedule , suitable payments to the players in order to incentivize them to reveal their true processing times .\nSuch a mechanism is called a truthful mechanism .\nThe makespan-minimization objective is quite different from the classic goal of social-welfare maximization , where one wants to maximize the total welfare ( or minimize the total cost ) of all players .\nInstead , it\ncorresponds to maximizing the minimum welfare and the notion of max-min fairness , and appears to be a much harder problem from the viewpoint of mechanism design .\nIn particular , the celebrated VCG [ 26 , 9 , 10 ] family of mechanisms does not apply here , and we need to devise new techniques .\nThe possibility of constructing a truthful mechanism for makespan minimization is strongly related to assumptions on the players ' processing times , in particular , the `` dimensionality '' of the domain .\nNisan and Ronen considered the setting of unrelated machines where the pij values may be arbitrary .\nThis is a multidimensional domain , since a player 's private value is its entire vector of processing times ( pij ) j. Very few positive results are known for multidimensional domains in general , and the only positive results known for multidimensional scheduling are O ( m ) - approximation truthful mechanisms [ 22 , 20 ] .\nWe emphasize that regardless of computational considerations , even the existence of a truthful mechanism with a significantly better ( than m ) approximation ratio is not known for any such scheduling domain .\nOn the negative side , [ 22 ] showed that no truthful deterministic mechanism can achieve approximation ratio better than 2 , and strengthened this lower bound to m for two specific classes of deterministic mechanisms .\nRecently , [ 20 ] extended this lower bound to randomized mechanisms , and [ 8 ] improved the deterministic lower bound .\nIn stark contrast with the above state of affairs , much stronger ( and many more ) positive results are known for a special case of the unrelated machines problem , namely , the setting of related machines .\nHere , we have pij = pj/si for every i , j , where pj is public knowledge , and the speed si is the only private parameter of machine i .\nThis assumption makes the domain of players ' types single-dimensional .\nTruthfulness in such domains is equivalent to a convenient value-monotonicity condition [ 21 , 3 ] , which appears to make it significantly easier to design truthful mechanisms in such domains .\nArcher and Tardos [ 3 ] first considered the related machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism .\nThe gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [ 3 ] showed that there exists a truthful mechanism that always outputs an optimal schedule .\n( Recall that in the multidimensional unrelated machines setting , it is impossible to obtain a truthful mechanism with approximation ratio better than 2 . 
)\nVarious follow-up results [ 2 , 4 , 1 , 13 ] have strengthened the notion of truthfulness and/or improved the approximation ratio .\nSuch difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism design settings ( e.g. , combinatorial auctions ) .\nThus , in addition to the specific importance of scheduling in strategic environments , ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains .\nIn this paper , we consider the makespan-minimization problem for a special case of unrelated machines , where the processing time of a job is either `` low '' or `` high '' on each machine .\nMore precisely , in our setting , pij \u2208 { Lj , Hj } for every i , j , where the Lj , Hj values are publicly known ( Lj - `` low '' , Hj - `` high '' ) .\nWe call this model the `` jobdependent two-values '' case .\nThis model generalizes the classic `` restricted machines '' setting , where pij \u2208 { Lj , \u221e } which has been well-studied algorithmically .\nA special case of our model is when Lj = L and Hj = H for all jobs j , which we denote simply as the `` two-values '' scheduling model .\nBoth of our domains are multidimensional , since the machines are unrelated : one job may be low on one machine and high on the other , while another job may follow the opposite pattern .\nThus , the private information of each machine is a vector specifying which jobs are low and high on it .\nThus , they retain the core property underlying the hardness of truthful mechanism design for unrelated machines , and by studying these special settings we hope to gain some insights that will be useful for tackling the general problem .\nOur Results and Techniques We present various positive results for our multidimensional scheduling domains .\nOur first result is a general method to convert any capproximation algorithm for the job-dependent two values setting into a 3c-approximation truthful-in-expectation mechanism .\nThis is one of the very few known results that use an approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem .\nOur result implies that there exists a 3-approximation truthfulin-expectation mechanism for the Lj-Hj setting .\nInterestingly , the proof of truthfulness is not based on supplying explicit prices , and our construction does not necessarily yield efficiently-computable prices ( but the allocation rule is efficiently computable ) .\nOur second result applies to the twovalues setting ( Lj = L , Hj = H ) , for which we improve both the approximation ratio and strengthen the notion of truthfulness .\nWe obtain a deterministic 2-approximation truthful mechanism ( along with prices ) for this problem .\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain .\nComplementing this , we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule ( unlike in the case of related machines ) .\nBy exploiting the multidimensionality of the domain , we prove that no truthful deterministic mechanism can obtain an approximation ratio better than 1.14 to the makespan ( irrespective of computational considerations ) .\nThe main technique , and one of the novelties , underlying our constructions and proofs , is that we do not rely on explicit price specifications in order to prove the 
truthfulness of our mechanisms .\nInstead we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm , i.e. , an algorithm for which prices ensuring truthfulness exist , and then find these prices ( by further delving into the proof of implementability ) .\nThis kind of analysis has been the method of choice in the design of truthful mechanisms for singledimensional domains , where value-monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem ( see , e.g. , [ 3 , 7 , 4 , 1 , 13 ] ) .\nBut for multidimensional domains , almost all positive results have relied on explicit price specifications in order to prove truthfulness ( an exception is the work on unknown single-minded players in combinatorial auctions [ 17 , 7 ] ) , a fact that yet again shows the gap in our understanding of multidimensional vs. single-dimensional domains .\nOur work is the first to leverage monotonicity conditions for truthful mechanism design in arbitrary domains .\nThe monotonicity condition we use , which is sometimes called cycle monotonicity , was first proposed by Rochet [ 23 ] ( see also [ 11 ] ) .\nIt is a generalization of value-monotonicity and completely characterizes truthfulness in every domain .\nOur methods and analyses demonstrate the potential benefits\nof this characterization , and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains .\nConsider , for example , our first result showing that any c-approximation algorithm can be `` exported '' to a 3c-approximation truthful-in-expectation mechanism .\nAt the level of generality of an arbitrary approximation algorithm , it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism .\nBut , cycle monotonicity does allow us to prove such a statement .\nIn fact , some such condition based only on the underlying algorithm ( and not on the prices ) seems necessary to prove such a general statement .\nThe method for converting approximation algorithms into truthful mechanisms involves another novel idea .\nOur randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule .\nMoving to a fractional domain allows us to `` plug-in '' truthfulness into the approximation algorithm in a rather simple fashion , while losing a factor of 2 in the approximation ratio .\nWe then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment .\nFor this , we use a recent rounding procedure of Kumar et al. 
[ 14 ] that is tailored for unrelated-machine scheduling .\nThis preserves truthfulness , but we lose another additive factor equal to the approximation ratio .\nOur construction uses and extends some observations of Lavi and Swamy [ 16 ] , and further demonstrates the benefits of fractional mechanisms in truthful mechanism design .\nRelated Work Nisan and Ronen [ 22 ] first considered the makespan-minimization problem for unrelated machines .\nThey gave an m-approximation positive result and proved various lower bounds .\nRecently , Mu'alem and Schapira [ 20 ] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms , and Christodoulou , Koutsoupias , and Vidali [ 8 ] proved a ( 1 + \\ / 2 ) - lower bound for deterministic truthful mechanisms.Archer and Tardos [ 3 ] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism .\nThis been improved in [ 2 , 4 , 1 , 13 ] to : a 2-approximation randomized mechanism [ 2 ] ; an FPTAS for any fixed number of machines given by Andelman , Azar and Sorani [ 1 ] , and a 3-approximation deterministic mechanism by Kov \u00b4 acs [ 13 ] .\nThe algorithmic problem ( i.e. , without requiring truthfulness ) of makespan-minimization on unrelated machines is well understood and various 2-approximation algorithms are known .\nLenstra , Shmoys and Tardos [ 18 ] gave the first such algorithm .\nShmoys and Tardos [ 25 ] later gave a 2approximation algorithm for the generalized assignment problem , a generalization where there is a cost cij for assigning a job j to a machine i , and the goal is to minimize the cost subject to a bound on the makespan .\nRecently , Kumar , Marathe , Parthasarathy , and Srinivasan [ 14 ] gave a randomized rounding algorithm that yields the same bounds .\nWe use their procedure in our randomized mechanism .\nThe characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [ 23 ] ( see also Gui et al. [ 11 ] ) .\nThis generalizes the value-monotonicity condition for single-dimensional domains which was given by Myerson [ 21 ] and rediscovered by [ 3 ] .\nAs mentioned earlier , this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [ 3 , 7 , 4 , 1 , 13 ] .\nFor convex domains ( i.e. , each players ' set of private values is convex ) , it is known that cycle monotonicity is implied by a simpler condition , called weak monotonicity [ 15 , 6 , 24 ] .\nBut even this simpler condition has not found much application in truthful mechanism design for multidimensional problems .\nObjectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design .\nIn the context of combinatorial auctions , the problems of maximizing the minimum value received by a player , and computing an envy-minimizing allocation have been studied briefly .\nLavi , Mu'alem , and Nisan [ 15 ] showed that the former objective can not be implemented truthfully ; Bezakova and Dani [ 5 ] gave a 0.5-approximation mechanism for two players with additive valuations .\nLipton et al. 
[ 19 ] showed that the latter objective can not be implemented truthfully .\nThese lower bounds were strengthened in [ 20 ] .\n2 .\nPRELIMINARIES\n2.1 The scheduling domain\nIn our scheduling problem , we are given n jobs and m machines , and each job must be assigned to exactly one machine .\nIn the unrelated-machines setting , each machine i is characterized by a vector of processing times ( pij ) j , where pij \u2208 R\u22650 \u222a { \u221e } denotes i 's processing time for job j , with the value \u221e specifying that i can not process j .\nWe consider two special cases of this problem : 1 .\nThe job-dependent two-values case , where pij \u2208 { Lj , Hj } for every i , j , with Lj < Hj , and the values Lj , Hj are known .\nThis generalizes the classic scheduling model of restricted machines , where Hj = \u221e .\n2 .\nThe two-values case , which is a special case of the above where Lj = L and Hj = H for all jobs j , i.e. , pij \u2208 { L , H } for every i , j .\nWe say that a job j is low on machine i if pij = Lj , and high if pij = Hj .\nWe will use the terms schedule and assignment interchangeably .\nWe represent a deterministic schedule by a vector x = ( xij ) i , j , where xij is 1 if job j is assigned to machine i ; thus we have xij \u2208 { 0 , 1 } for every i , j , and \u2211i xij = 1 for every job j .\nWe will also consider randomized algorithms and algorithms that return a fractional assignment .\nIn both these settings , we will again specify an assignment by a vector x = ( xij ) i , j with \u2211i xij = 1 for every job j , but now xij \u2208 [ 0 , 1 ] for every i , j .\nFor a randomized algorithm , xij is simply the probability that j is assigned to i ( thus , x is a convex combination of integer assignments ) .\nWe denote the load of machine i ( under a given assignment x ) by li = \u2211j xij pij , and the makespan of a schedule is defined as the maximum load on any machine , i.e. , maxi li .\nThe goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule .
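As a concrete illustration of the definitions in Section 2.1, the following minimal Python sketch evaluates the loads li = \u2211j xij pij and the makespan maxi li of a given (deterministic or fractional) assignment in the two-values model; all function and variable names here are illustrative and are not part of the paper.

# Minimal sketch of the load and makespan definitions (illustration only; names are ours).
# p[i][j] is machine i's processing time for job j (drawn from {L, H} in the two-values model);
# x[i][j] is a deterministic (0/1) or fractional assignment with sum_i x[i][j] = 1 for each job j.

def loads(x, p):
    """Machine loads l_i = sum_j x_ij * p_ij."""
    return [sum(xij * pij for xij, pij in zip(xi, pi)) for xi, pi in zip(x, p)]

def makespan(x, p):
    """Makespan = maximum load over all machines."""
    return max(loads(x, p))

# Example: 2 machines, 3 jobs, two-values model with L = 1, H = 3.
L, H = 1, 3
p = [[L, H, L],
     [H, L, L]]
x = [[1, 0, 0],   # job 0 goes to the machine on which it is low
     [0, 1, 1]]   # jobs 1 and 2 go to machine 1
assert loads(x, p) == [1, 2] and makespan(x, p) == 2

The same two functions apply verbatim to fractional assignments, since only the values xij change.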
2.2 Mechanism design\nWe consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design .\nMechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure .\nFollowing the work of Nisan and Ronen [ 22 ] , we consider the machines to be the strategic players or agents .\nThe social designer holds the set of jobs that need to be assigned , but does not know the ( true ) processing times of these jobs on the different machines .\nEach machine is a selfish entity that privately knows its own processing time for each job .\nProcessing a job on a machine incurs a cost to the machine equal to the true processing time of the job on that machine , and a machine may choose to misrepresent its vector of processing times , which are private , in order to decrease its cost .\nWe consider direct-revelation mechanisms : each machine reports its ( possibly false ) vector of processing times , the mechanism then computes a schedule and hands out payments to the players ( i.e. , machines ) to compensate them for the cost they incur in processing their assigned jobs .\nA ( direct-revelation ) mechanism thus consists of a tuple ( x , P ) : x specifies the schedule , and P = { Pi } specifies the payments handed out to the machines , where both x and the Pi 's are functions of the reported processing times p = ( pij ) i , j .\nThe mechanism 's goal is to compute a schedule that has near-optimal makespan with respect to the true processing times ; a machine i is however only interested in maximizing its own utility , Pi \u2212 li , where li is its load under the output assignment , and may declare false processing times if this could increase its utility .\nThe mechanism must therefore incentivize the machines/players to truthfully reveal their processing times via the payments .\nThis is made precise using the notion of dominant-strategy truthfulness .\nDefinition 2.1 A mechanism ( x , P ) is truthful if for every machine i , every vector p \u2212 i of declarations of the other machines , every true processing-time vector p1i of machine i , and every possible declaration p2i of machine i , we have Pi1 \u2212 \u2211j x1ij p1ij \u2265 Pi2 \u2212 \u2211j x2ij p1ij , ( 1 )\nwhere ( x1 , P1 ) and ( x2 , P2 ) are respectively the schedule and payments when the other machines declare p \u2212 i and machine i declares p1i and p2i , i.e. , x1 = x ( p1i , p \u2212 i ) , Pi1 = Pi ( p1i , p \u2212 i ) and x2 = x ( p2i , p \u2212 i ) , Pi2 = Pi ( p2i , p \u2212 i ) .\nTo put it in words , in a truthful mechanism , no machine can improve its utility by declaring a false processing time , no matter what the other machines declare .\nWe will also consider fractional mechanisms that return a fractional assignment , and randomized mechanisms that are allowed to toss coins and where the assignment and the payments may be random variables .\nThe notion of truthfulness for a fractional mechanism is the same as in Definition 2.1 , where x1 , x2 are now fractional assignments .\nFor a randomized mechanism , we will consider the notion of truthfulness in expectation [ 3 ] , which means that a machine ( player ) maximizes her expected utility by declaring her true processing-time vector .\nInequality ( 1 ) also defines truthfulness-in-expectation for a randomized mechanism , where Pi1 , Pi2 now denote the expected payments made to player i , and x1 , x2 are the fractional assignments denoting the randomized algorithm 's schedule ( i.e. , xkij is the probability that j is assigned to i in the schedule output for ( pki , p \u2212 i ) ) .\nFor our two scheduling domains , the informational assumption is that the values Lj , Hj are publicly known .\nThe private information of a machine is which jobs have value Lj ( or L ) and which ones have value Hj ( or H ) on it .\nWe emphasize that both of our domains are multidimensional , since each machine i needs to specify a vector saying which jobs are low and high on it .
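Because the two-values domain is finite (a machine's type is a vector in { L , H } n), inequality ( 1 ) of Definition 2.1 can be verified exhaustively on small instances. The sketch below is only an illustration of the definition, not part of the paper's mechanisms: the mechanism interface (a function returning an assignment matrix and a payment vector) and all names are assumptions made for this example. It checks truthfulness for a single machine i against every possible false declaration while the other machines' reports are held fixed; a full check would repeat this for every machine and every profile of reports.

from itertools import product

def truthful_for_machine(i, true_pi, reports, mechanism, L, H, eps=1e-9):
    """Brute-force check of inequality (1) for machine i: with the other rows of
    `reports` held fixed, no false declaration of machine i should give it higher
    utility than reporting `true_pi`.  `mechanism(p)` is assumed to return (x, P),
    an assignment matrix and a payment vector computed from the reported matrix p.
    Utility = payment minus TRUE load."""
    n = len(true_pi)

    def utility(declared_row):
        p = [row[:] for row in reports]
        p[i] = list(declared_row)
        x, P = mechanism(p)
        return P[i] - sum(x[i][j] * true_pi[j] for j in range(n))

    u_truth = utility(true_pi)
    # Enumerate all 2^n low/high type vectors machine i could declare.
    return all(u_truth + eps >= utility(lie) for lie in product([L, H], repeat=n))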
3 .\nCYCLE MONOTONICITY\nAlthough truthfulness is defined in terms of payments , it turns out that truthfulness actually boils down to a certain algorithmic condition of monotonicity .\nThis seems to have been first observed for multidimensional domains by Rochet [ 23 ] in 1987 , and has been used successfully in algorithmic mechanism design several times , but for single-dimensional domains .\nHowever , for multidimensional domains , the monotonicity condition is more involved and there has been no success in employing it in the design of truthful mechanisms .\nMost positive results for multidimensional domains have relied on explicit price specifications in order to prove truthfulness .\nOne of the main contributions of this paper is to demonstrate that the monotonicity condition for multidimensional settings , which is sometimes called cycle monotonicity , can indeed be effectively utilized to devise truthful mechanisms .\nWe include a brief exposition on it for completeness .\nThe exposition here is largely based on [ 11 ] .\nCycle monotonicity is best described in the abstract social choice setting : there is a finite set A of alternatives , there are m players , and each player i has a private type ( valuation function ) vi : A \u2192 R , where vi ( a ) should be interpreted as i 's value for alternative a .\nIn the scheduling domain , A represents all the possible assignments of jobs to machines , and vi ( a ) is the negative of i 's load in the schedule a .\nLet Vi denote the set of all possible types of player i .\nA mechanism is a tuple ( f , { Pi } ) where f : V1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Vm \u2192 A is the `` algorithm '' for choosing the alternative , and Pi : V1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Vm \u2192 R is the price charged to player i ( in the scheduling setting , the mechanism pays the players , which corresponds to negative prices ) .\nThe mechanism is truthful if for every i , every v \u2212 i \u2208 V \u2212 i = \u220fi' \u2260 i Vi' , and any vi , v'i \u2208 Vi we have vi ( a ) \u2212 Pi ( vi , v \u2212 i ) \u2265 vi ( b ) \u2212 Pi ( v'i , v \u2212 i ) , where a = f ( vi , v \u2212 i ) and b = f ( v'i , v \u2212 i ) .\nA basic question that arises is : given an algorithm f : V1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Vm \u2192 A , do there exist prices that will make the resulting mechanism truthful ?\nIt is well known ( see e.g. [ 15 ] ) that the price Pi can only depend on the alternative chosen and the others ' declarations , that is , we may write Pi : V \u2212 i \u00d7 A \u2192 R .\nThus , truthfulness implies that for every i , every v \u2212 i \u2208 V \u2212 i , and any vi , v'i \u2208 Vi with f ( vi , v \u2212 i ) = a and f ( v'i , v \u2212 i ) = b , we have vi ( a ) \u2212 Pi ( a , v \u2212 i ) \u2265 vi ( b ) \u2212 Pi ( b , v \u2212 i ) .\nNow fix a player i , and fix the declarations v \u2212 i of the others .\nWe seek an assignment to the variables { Pa } a \u2208 A such that vi ( a ) \u2212 vi ( b ) \u2265 Pa \u2212 Pb for every a , b \u2208 A and vi \u2208 Vi with f ( vi , v \u2212 i ) = a .\n( Strictly speaking , we should use A' = f ( Vi , v \u2212 i ) instead of A here . )\nDefine \u03b4a , b : = inf { vi ( a ) \u2212 vi ( b ) : vi \u2208 Vi , f ( vi , v \u2212 i ) = a } .\nWe can now rephrase the above price-assignment problem : we seek an assignment to the variables { Pa } a \u2208 A such that Pa \u2212 Pb \u2264 \u03b4a , b for every a , b \u2208 A . ( 2 )\nThis is easily solved by looking at the allocation graph and applying a standard basic result of graph theory .\nDefinition 3.1 ( Gui et al. [ 11 ] ) The allocation graph of f is a directed weighted graph G = ( A , E ) where E = A \u00d7 A and the weight of an edge b \u2192 a ( for any a , b \u2208 A ) is \u03b4a , b .\nTheorem 3.2 There exists a feasible assignment to ( 2 ) iff the allocation graph has no negative-length cycles .\nFurthermore , if all cycles are non-negative , a feasible assignment is obtained as follows : fix an arbitrary node a * \u2208 A and set Pa to be the length of the shortest path from a * to a .\nThis leads to the following definition , which is another way of phrasing the condition that the allocation graph have no negative cycles .\nDefinition 3.3 ( Cycle monotonicity ) An algorithm f satisfies cycle monotonicity if for every player i , every v \u2212 i \u2208 V \u2212 i , every integer K , and every v1i , . . . , vKi \u2208 Vi , \u2211Kk=1 ( vki ( ak ) \u2212 vki ( ak +1 ) ) \u2265 0 , ( 3 )\nwhere ak = f ( vki , v \u2212 i ) for 1 \u2264 k \u2264 K , and aK +1 = a1 .\nThus ( 3 ) `` reduces '' our mechanism design problem to a concrete algorithmic problem .\nFor most of this paper , we will consequently ignore any strategic considerations and focus on designing an approximation algorithm for minimizing makespan that satisfies ( 3 ) .
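For tiny instances where the alternative set A and the \u03b4a , b values can be tabulated explicitly, Theorem 3.2 is directly algorithmic: a Bellman-Ford pass over the allocation graph detects negative cycles (i.e., violations of cycle monotonicity) and, when none exist, returns the shortest-path prices. The following sketch is purely illustrative (the dense \u03b4 matrix representation and the function name are our assumptions); the mechanisms in this paper never build the allocation graph explicitly, since A contains all m^n assignments.

def prices_from_allocation_graph(delta, source=0):
    """Bellman-Ford on the allocation graph: delta[a][b] is the weight of the edge
    b -> a.  Returns the price vector (P_a = shortest-path length from `source` to a)
    if every cycle has non-negative length, and None if a negative cycle exists,
    in which case no prices can make the algorithm truthful."""
    A = range(len(delta))
    dist = [float("inf")] * len(delta)
    dist[source] = 0.0
    # |A| - 1 rounds of relaxation over all edges b -> a.
    for _ in range(len(delta) - 1):
        for b in A:
            for a in A:
                if dist[b] + delta[a][b] < dist[a]:
                    dist[a] = dist[b] + delta[a][b]
    # Any further improvement certifies a negative cycle, i.e. a violation of (3).
    for b in A:
        for a in A:
            if dist[b] + delta[a][b] < dist[a] - 1e-12:
                return None
    return dist

When all \u03b4 values are finite, the returned prices satisfy Pa \u2212 Pb \u2264 \u03b4a , b for every a , b, which is exactly constraint ( 2 ).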
Thus, truthfulness implies that for every $i$, every $v_{-i} \in V_{-i}$, and any $v_i, v'_i \in V_i$ with $f(v_i, v_{-i}) = a$ and $f(v'_i, v_{-i}) = b$, we have $v_i(a) - P_i(a, v_{-i}) \ge v_i(b) - P_i(b, v_{-i})$. Now fix a player $i$ and fix the declarations $v_{-i}$ of the others. We seek an assignment to the variables $\{P_a\}_{a \in A}$ such that $v_i(a) - v_i(b) \ge P_a - P_b$ for every $a, b \in A$ and every $v_i \in V_i$ with $f(v_i, v_{-i}) = a$. (Strictly speaking, we should use $A' = f(V_i, v_{-i})$ instead of $A$ here.) Define
$$\delta_{a,b} := \inf\{\, v_i(a) - v_i(b) \;:\; v_i \in V_i,\ f(v_i, v_{-i}) = a \,\}.$$
We can now rephrase the above price-assignment problem: we seek an assignment to the variables $\{P_a\}_{a \in A}$ such that
$$P_a - P_b \;\le\; \delta_{a,b} \quad \text{for every } a, b \in A. \qquad (2)$$
This is easily solved by looking at the allocation graph and applying a standard basic result of graph theory.

Definition 3.1 (Gui et al. [11]). The allocation graph of $f$ is a directed weighted graph $G = (A, E)$ where $E = A \times A$ and the weight of an edge $b \to a$ (for any $a, b \in A$) is $\delta_{a,b}$.

Theorem 3.2. There exists a feasible assignment to (2) iff the allocation graph has no negative-length cycles. Furthermore, if all cycles are non-negative, a feasible assignment is obtained as follows: fix an arbitrary node $a^* \in A$ and set $P_a$ to be the length of the shortest path from $a^*$ to $a$.

This leads to the following definition, which is another way of phrasing the condition that the allocation graph have no negative cycles.

Definition 3.3 (Cycle monotonicity). $f$ satisfies cycle monotonicity if for every player $i$, every $v_{-i} \in V_{-i}$, every integer $K$, and every $v^1_i, \dots, v^K_i \in V_i$,
$$\sum_{k=1}^{K} \big( v^k_i(a_k) - v^k_i(a_{k+1}) \big) \;\ge\; 0, \qquad (3)$$
where $a_k = f(v^k_i, v_{-i})$ for $1 \le k \le K$, and $a_{K+1} = a_1$.

Thus, (3) reduces our mechanism design problem to a concrete algorithmic problem. For most of this paper we will consequently ignore strategic considerations and focus on designing an approximation algorithm for minimizing makespan that satisfies (3).

4. A GENERAL TECHNIQUE TO OBTAIN RANDOMIZED MECHANISMS

In this section we consider the case of job-dependent $L_j, H_j$ values (with $L_j < H_j$), which generalizes the classical restricted-machines model (where $H_j = \infty$). We show the power of randomization by providing a general technique that converts any $c$-approximation algorithm into a $3c$-approximation, truthful-in-expectation mechanism. This is one of the few results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms when the algorithm is given only as a black box. Our construction and proof are simple, and based on two ideas. First, as outlined above, we prove truthfulness using cycle monotonicity. It seems unlikely that for an arbitrary approximation algorithm given only as a black box one could come up with payments in order to prove truthfulness; cycle monotonicity allows us to prove precisely this. Second, we obtain our randomized mechanism by (a) first moving to a fractional domain and constructing a fractional truthful mechanism that is allowed to return fractional assignments, and then (b) using a rounding procedure to express the fractional schedule as a convex combination of integer schedules. This builds upon a theme introduced by Lavi and Swamy [16], namely that of using fractional mechanisms to obtain truthful-in-expectation mechanisms. We should point out, however, that one cannot simply plug in the results of [16]: their results hold for social-welfare-maximization problems and rely on using VCG to obtain a fractional
truthful mechanism .\nVCG however does not apply to makespan minimization , and in our case even the existence of a near-optimal fractional truthful mechanism is not known .\nWe use the following result adapted from [ 16 ] .\nLet OPT ( p ) denote the optimal makespan ( over integer schedules ) for instance p .\nAs our first step , we take a capproximation algorithm and convert it to a 2c-approximation fractional truthful mechanism .\nThis conversion works even when the approximation algorithm returns only a fractional schedule ( satisfying certain properties ) of makespan at most c \u2022 OPT ( p ) for every instance p .\nWe prove truthfulness by showing that the fractional algorithm satisfies cycle monotonicity ( 3 ) .\nNotice that the alternative-set of our fractional mechanism is finite ( although the set of all fractional assignments is infinite ) : its cardinality is at most that of the inputdomain , which is at most 2mn in the two-value case .\nThus , we can apply Corollary 3.4 here .\nTo convert this fractional truthful mechanism into a randomized truthful mechanism we need a randomized rounding procedure satisfying the requirements of Lemma 4.1 .\nFortunately , such a procedure is already provided by Kumar , Marathe , Parthasarathy , and Srinivasan [ 14 ] .\n1 .\nfor any i , j , E\u02c6Xij\u02dc = xij .\n2 .\nfor any i , Pj Xijpij < Pj xijpij + max { j : xij E ( 0,1 ) } pij with probability 1 .\nProperty 1 will be used to obtain truthfulness in expectation , and property 2 will allow us to prove an approximation guarantee .\nWe first show that any algorithm that returns a fractional assignment having certain properties satisfies cycle monotonicity .\nPPROOF .\nFix a player i , and the vector of processing times of the other players p_i .\nWe need to prove ( 3 ) , that is ,\nIf pkij is the same for all k ( either always Lj or always Hj ) , then the above inequality clearly holds .\nOtherwise we can\ndivide the indices 1 , ... , K , into maximal segments , where a maximal segment is a maximal set of consecutive indices\nmust be some k such that pkij = Hj > pk \u2212 1 ij = Lj .\nWe take k0 = k and then keep including indices in this segment till we reach a k such that pkij = Lj and pk +1 ij = Hj .\nWe set k00 = k , and then start a new maximal segment with index k00 + 1 .\nNote that k00 = 6 k0 and k00 + 1 = 6 k0 \u2212 1 .\nWe now have a subset of indices and we can continue recursively .\nSo all indices are included in some maximal segment .\nWe will show that for every such maximal segment k0 , k0 + 1 , ... , k00 , Pk , \u2212 1 \u2264 k < k , , xk +1 ` pk ij \u2212 pk +1 \u00b4 \u2265 0 .\nAdding this for each ij ij segment yields the desired inequality .\nSo now focus on a maximal segment k0 , k0 + 1 , ... 
, k00 \u2212 1 , k00 .\nThus , there is some k \u2217 such that for k0 \u2264 k < k \u2217 , we have pkij = Hj , and for k \u2217 \u2264 k \u2264 k00 , we have pk ij = Lj .\nNow the left hand side of the above inequality for this segment is simply xk ,\nWe now describe how to use a c-approximation algorithm to obtain an algorithm satisfying the property in Lemma 4.3 .\nFor simplicity , first suppose that the approximation algorithm returns an integral schedule .\nThe idea is to simply `` spread '' this schedule .\nWe take each job j assigned to a high machine and assign it to an extent 1/m on all machines ; for each job j assigned to a low machine , say i , we assign 1/m-fraction of it to the other machines where it is low , and assign its remaining fraction ( which is at least 1/m ) to i .\nThe resulting assignment clearly satisfies the desired properties .\nAlso observe that the load on any machine has at most increased by m1 \u00b7 ( load on other machines ) \u2264 makespan , and hence the makespan has at most doubled .\nThis `` spreading out '' can also be done if the initial schedule is fractional .\nWe now describe the algorithm precisely .\nAlgorithm 1 Let A be any algorithm that on any input p outputs a possibly fractional assignment x such that xij > 0 implies that pij \u2264 T , where T is the makespan of x. ( In particular , note that any algorithm that returns an integral assignment has these properties . )\nOur algorithm , which we call A0 , returns the following assignment xF .\nInitialize xFij = 0 for all i , j. For every i , j ,\nTheorem 4.4 Suppose algorithm A satisfies the conditions in Algorithm 1 and returns a makespan of at most c \u00b7 OPT ( p ) for every p. Then , the algorithm A0 constructed above is a 2c-approximation , cycle-monotone fractional algorithm .\nMoreover , if xFij > 0 on input p , then pij \u2264 c \u00b7 OPT ( p ) .\nPROOF .\nFirst , note that xF is a valid assignment : for every job j , Pi xFij = Pi xij + P i , i ,6 = i :p % j =p % , j = Lj ( xi , j \u2212 xij ) / m = Pi xij = 1 .\nWe also have that if pij = Hj then xFij = Pi , :p % , j = Hj xi , j/m \u2264 1/m .\nIf pij = Lj , then xF ij = xij ( 1 \u2212 ` / m ) + Pi ,6 = i xi , j/m where ` = | { i0 = 6 i :\nTheorem 4.4 combined with Lemmas 4.1 and 4.2 , gives a 3c-approximation , truthful-in-expectation mechanism .\nThe computation of payments will depend on the actual approximation algorithm used .\nSection 3 does however give an explicit procedure to compute payments ensuring truthfulness , though perhaps not in polynomial-time .\nTheorem 4.5 The procedure in Algorithm 1 converts any c-approximation fractional algorithm into a 3c-approximation , truthful-in-expectation mechanism .\nTaking A in Algorithm 1 to be the algorithm that returns an LP-optimum assignment satisfying the required conditions ( see [ 18 , 25 ] ) , we obtain a 3-approximation mechanism .\nCorollary 4.6 There is a truthful-in-expectation mechanism with approximation ratio 3 for the Lj-Hj setting .\n5 .\nA DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE\nWe now present a deterministic 2-approximation truthful mechanism for the case where pij \u2208 { L , H } for all i , j .\nIn the sequel , we will often say that j is assigned to a lowmachine to denote that j is assigned to a machine i where pij = L .\nWe will call a job j a low job of machine i if pij = L ; the low-load of i is the load on i due to its low jobs , i.e. 
, Pj :p % j = L xijpij .\nAs in Section 4 , our goal is to obtain an approximation algorithm that satisfies cycle monotonicity .\nWe first obtain a simplification of condition ( 3 ) for our two-values { L , H } scheduling domain ( Proposition 5.1 ) that will be convenient to work with .\nWe describe our algorithm in Section 5.1 .\nIn Section 5.2 , we bound its approximation guarantee and prove that it satisfies cycle-monotonicity .\nIn Section 5.3 , we compute explicit payments giving a truthful mechanism .\nFinally , in Section 5.4 we show that no deterministic mechanism can achieve the optimum makespan .\nDefine\nPlugging this into ( 3 ) and dividing by ( H \u2212 L ) , we get the following .\n5.1 A cycle-monotone approximation algorithm\nWe now describe an algorithm that satisfies condition ( 6 ) and achieves a 2-approximation .\nWe will assume that L , H are integers , which is without loss of generality .\nA core component of our algorithm will be a procedure that takes an integer load threshold T and computes an integer partial assignment x of jobs to machines such that ( a ) a job is only assigned to a low machine ; ( b ) the load on any machine is at most T ; and ( c ) the number of jobs assigned is maximized .\nSuch an assignment can be computed by solving a max-flow problem : we construct a directed bipartite graph with a node for every job j and every machine i , and an edge ( j , i ) of infinite capacity if pij = L .\nWe also add a source node s with edges ( s , j ) having capacity 1 , and sink node t with edges ( i , t ) having capacity bT/Lc .\nClearly any integer flow in this network corresponds to a valid integer partial assignment x of makespan at most T , where xij = 1 iff there is a flow of 1 on the edge from j to i .\nWe will therefore use the terms assignment and flow interchangeably .\nMoreover , there is always an integral max-flow ( since all capacities are integers ) .\nWe will often refer to such a max-flow as the max-flow for ( p , T ) .\nWe need one additional concept before describing the algorithm .\nThere could potentially be many max-flows and we will be interested in the most `` balanced '' ones , which we formally define as follows .\nFix some max-flow .\nLet ni p , T be the amount of flow on edge ( i , t ) ( or equivalently the number of jobs assigned to i in the corresponding schedule ) , and let np , T be the total size of the max-flow , i.e. , np , T = Pi nip , T .\nFor any T ' \u2264 T , define nip , T | T ' = min ( nip , T , T ' ) , that is , we\nThat is , in a prefix-maximal flow for ( p , T ) , if we truncate the flow at some T ' \u2264 T , we are left with a max-flow for ( p , T ' ) .\nAn elementary fact about flows is that if an assignment/flow x is not a maximum flow for ( p , T ) then there must be an augmenting path P = ( s , j1 , i1 , ... 
, jK , iK , t ) in the residual graph that allows us to increase the size of the flow .\nThe interpretation is that in the current assignment , j1 is unassigned , xi8j8 = 0 , which is denoted by the forward edges ( j ` , i ` ) , and xi8j8 +1 = 1 , which is denoted by the reverse edges ( i ` , j ` +1 ) .\nAugmenting x using P changes the assignment so that each j ` is assigned to i ` in the new assignment , which increases the value of the flow by 1 .\nA simple augmenting path does not decrease the load of any machine ; thus , one can argue that a prefix-maximal flow for a threshold T always exists .\nWe first compute a max-flow for threshold 1 , use simple augmenting paths to augment it to a max-flow for threshold 2 , and repeat , each time augmenting the max-flow for the previous threshold t to a max-flow for threshold t + 1 using simple augmenting paths .\nAlgorithm 2 Given a vector of processing times p , construct an assignment of jobs to machines as follows .\n1 .\nCompute T * ( p ) = min\u02d8T \u2265 H , T multiple of L : np , T \u00b7 L + ( n \u2212 np , T ) \u00b7 H \u2264 m \u00b7 T \u00af .\nNote that np , T \u00b7 L + ( n \u2212 np , T ) \u00b7 H \u2212 m \u00b7 T is a decreasing function of T , so T * ( p ) can be computed in polynomial time via binary search .\n2 .\nCompute a prefix-maximal flow for threshold T * ( p ) and the corresponding partial assignment ( i.e. , j is assigned to i iff there is 1 unit of flow on edge ( j , i ) ) .\n3 .\nAssign the remaining jobs , i.e. , the jobs unassigned in the flow-phase , in a greedy manner as follows .\nCon\nsider these jobs in an arbitrary order and assign each job to the machine with the current lowest load ( where the load includes the jobs assigned in the flow-phase ) .\nOur algorithm needs to compute a prefix-maximal assignment for the threshold T * ( p ) .\nThe proof showing the existence of a prefix-maximal flow only yields a pseudopolynomial time algorithm for computing it .\nBut notice that the max-flow remains the same for any T \u2265 T ' = n \u00b7 L .\nSo a prefix-maximal flow for T ' is also prefix-maximal for any T \u2265 T ' .\nThus , we only need to compute a prefix-maximal flow for T '' = min { T * ( p ) , T ' } .\nThis can be be done in polynomial time by using the iterative-augmenting-paths algorithm in the existence proof to compute iteratively the maxflow for the polynomially many multiples of L up to ( and including ) T '' .\n5.2 Analysis\nLet OPT ( p ) denote the optimal makespan for p .\nWe now prove that Algorithm 2 is a 2-approximation algorithm that satisfies cycle monotonicity .\nThis will then allow us to compute payments in Section 5.3 and prove Theorem 5.3 .\n5.2.1 Proof of approximation Claim 5.4 If OPT ( p ) < H , the makespan is at most OPT ( p ) .\nPROOF .\nIf OPT ( p ) < H , it must be that the optimal schedule assigns all jobs to low machines , so np , OPT ( p ) = n. Thus , we have T * ( p ) = L \u00b7 dHL e. 
Furthermore, since we compute a prefix-maximal flow for threshold $T^*(p)$, we have $n_{p,T^*(p)}|_{OPT(p)} = n_{p,OPT(p)} = n$, which implies that the load on each machine is at most $OPT(p)$. So in this case the makespan is at most (and hence exactly) $OPT(p)$.

Claim 5.5. If $OPT(p) \ge H$, then $T^*(p) \le L \cdot \lceil OPT(p)/L \rceil < OPT(p) + L$.

PROOF. Let $n_{OPT(p)}$ be the number of jobs assigned to low machines in an optimum schedule. The total load on all machines is exactly $n_{OPT(p)} \cdot L + (n - n_{OPT(p)}) \cdot H$, and is at most $m \cdot OPT(p)$, since every machine has load at most $OPT(p)$. Now take $T = L \cdot \lceil OPT(p)/L \rceil$: this $T$ is a multiple of $L$ with $T \ge OPT(p) \ge H$, and $n_{p,T} \ge n_{OPT(p)}$, since the low assignments of the optimum schedule form a feasible flow for $(p, T)$. Hence $n_{p,T} \cdot L + (n - n_{p,T}) \cdot H \le n_{OPT(p)} \cdot L + (n - n_{OPT(p)}) \cdot H \le m \cdot OPT(p) \le m \cdot T$, so $T$ satisfies the condition in step 1 of the algorithm, and therefore $T^*(p) \le T < OPT(p) + L$.

Claim 5.6. Every job assigned in step 3 of the algorithm is assigned to a machine on which it is high, i.e., if $j$ is assigned to machine $i$ in step 3 then $p_{ij} = H$.

PROOF. Suppose $j$ is assigned to machine $i$ in step 3 and $p_{ij} = L$. Then the flow-phase load of $i$ must equal $T^*(p)$ (i.e., $n^i_{p,T^*(p)} = T^*(p)/L$), since otherwise we could have assigned $j$ to $i$ in step 2 to obtain a flow of larger value. So at the point just before $j$ is assigned in step 3, the load of each machine must be at least $T^*(p)$ (by the greedy rule, $i$ has the smallest load at that point). Hence the total load after $j$ is assigned is at least $m \cdot T^*(p) + L > m \cdot T^*(p)$. But the total load is also at most $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H \le m \cdot T^*(p)$, yielding a contradiction.

Lemma 5.7. The above algorithm returns a schedule with makespan at most $OPT(p) + \max\{L, H(1 - 1/m)\} \le 2 \cdot OPT(p)$.

PROOF. If $OPT(p) < H$, then by Claim 5.4 we are done. So suppose $OPT(p) \ge H$. By Claim 5.5, we know that $T^*(p) < OPT(p) + L$. If there are no unassigned jobs after step 2 of the algorithm, then the makespan is at most $T^*(p)$ and we are done. So assume that some jobs are unassigned after step 2. We will show that the makespan after step 3 is at most $T + H(1 - 1/m)$, where $T = \min\{T^*(p), OPT(p)\}$. Suppose the claim is false, and let $i$ be the machine with the maximum load, so $l_i > T + H(1 - 1/m)$. Let $j$ be the last job assigned to $i$ in step 3, and consider the point just before it is assigned; since $p_{ij} = H$ by Claim 5.6, at this point $l_i > T - H/m$. Also, since $j$ is assigned to $i$, by our greedy rule the load on every other machine at this point is at least $l_i$. So the total load after $j$ is assigned is at least $H + m \cdot l_i > m \cdot T$. On the other hand, for any assignment of jobs to machines in step 3, the total load is at most $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$, since $n_{p,T^*(p)}$ jobs are assigned to low machines. Therefore we must have $m \cdot T < n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$. But we now argue that $m \cdot T \ge n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$, which yields a contradiction. If $T = T^*(p)$, this follows from the definition of $T^*(p)$. If $T = OPT(p)$, then letting $n_{OPT(p)}$ denote the number of jobs assigned to low machines in an optimum schedule, we have $n_{p,T^*(p)} \ge n_{OPT(p)}$, so $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H \le n_{OPT(p)} \cdot L + (n - n_{OPT(p)}) \cdot H$. The latter is exactly the total load in an optimum schedule, which is at most $m \cdot OPT(p) = m \cdot T$.

5.2.2 Proof of cycle monotonicity

Lemma 5.8. Consider any two instances $p = (p_i, p_{-i})$ and $p' = (p'_i, p_{-i})$ where $p'_i \ge p_i$, i.e., $p'_{ij} \ge p_{ij}$ for all $j$. If $T$ is a threshold such that $n_{p,T} > n_{p',T}$, then every maximum flow $x'$ for $(p', T)$ must assign all jobs $j$ such that $p'_{ij} = L$.
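Before the proof of Lemma 5.8, it may help to see the flow computation that Algorithm 2 and this section rely on written out. The sketch below is illustrative only: it computes some maximum flow for $(p, T)$ by augmenting paths (not the prefix-maximal flow required by step 2), it finds $T^*(p)$ by a linear scan over multiples of $L$ rather than by binary search, and all names are our own assumptions.

def n_low(p, L, T):
    """n_{p,T}: maximum number of jobs assignable to machines where they are low
    (p[i][j] == L), with at most T // L jobs per machine (simple augmenting paths)."""
    m, n = len(p), len(p[0])
    cap = T // L
    assign = [None] * n          # job -> machine, or None if unassigned
    load = [0] * m               # number of low jobs currently on each machine

    def augment(j, seen):
        for i in range(m):
            if p[i][j] != L or i in seen:
                continue
            seen.add(i)
            if load[i] < cap:
                assign[j] = i
                load[i] += 1
                return True
            # machine i is saturated: try to relocate one of its current jobs
            for j2 in range(n):
                if assign[j2] == i and augment(j2, seen):
                    assign[j] = i          # j takes the slot vacated by j2
                    return True
        return False

    return sum(1 for j in range(n) if augment(j, set()))

def T_star(p, L, H):
    """Step 1 of Algorithm 2: smallest multiple of L with T >= H such that
    n_{p,T} * L + (n - n_{p,T}) * H <= m * T (linear scan instead of binary search)."""
    m, n = len(p), len(p[0])
    T = ((H + L - 1) // L) * L           # smallest multiple of L that is >= H
    while True:
        k = n_low(p, L, T)
        if k * L + (n - k) * H <= m * T:
            return T
        T += L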
PROOF .\nLet Gp , denote the residual graph for ( p0 , T ) and flow x0 .\nSuppose by contradiction that there exists a job j \u2217 with p0ij * = L that is unassigned by x0 .\nSince p0i > pi , all edges ( j , i ) that are present in the network for ( p0 , T ) are also present in the network for ( p , T ) .\nThus , x0 is a valid flow for ( p , T ) .\nBut it is not a max-flow , since np , T > np , , T .\nSo there exists an augmenting path P in the residual graph for ( p , T ) and flow x0 .\nObserve that node i must be included in P , otherwise P would also be an augmenting path in the residual graph Gp , contradicting the fact that x0 is a maxflow .\nIn particular , this implies that there is a path P0 C P from i to the sink t. Let P0 = ( i , j1 , i1 , ... , jK , iK , t ) .\nAll the edges of P0 are also present as edges in Gp , -- all reverse edges ( i ` , j ` +1 ) are present since such an edge implies that x0 i ` j ` +1 = 1 ; all forward edges ( j ` , i ` ) are present since i ` = 6 i so p0i ` j ` = pi ` j ` = L , and x0i ` j ` +1 = 0 .\nBut then there is an augmenting path ( j \u2217 , i , j1 , i1 , ... , jK , iK , t ) in Gp , which contradicts the maximality of x0 .\nL ~ denote the all-low processing time vector .\nDefine TiL ( p \u2212 i ) = T \u2217 ( ~ L , p \u2212 i ) .\nSince we are focusing on machine i , and p \u2212 i is fixed throughout , we abbreviate TiL ( p \u2212 i ) to TL .\nAlso , let pL = ( ~ L , p \u2212 i ) .\nNote that T \u2217 ( p ) > TL for every instance p = ( pi , p \u2212 i ) .\nCorollary 5.9 Let p = ( pi , p \u2212 i ) be any instance and let x be any prefix-maximal flow for ( p , T \u2217 ( p ) ) .\nThen , the low-load on machine i is at most TL .\nPROOF .\nLet T \u2217 = T \u2217 ( p ) .\nIf T \u2217 = TL , then this is clearly true .\nOtherwise , consider the assignment x truncated at TL .\nSince x is prefix-maximal , we know that this constitutes a max-flow for ( p , TL ) .\nAlso , np , T L < npL , T L because T \u2217 > TL .\nSo by Lemma 5.8 , this truncated flow must assign all the low jobs of i. Hence , there can not be a job j with pij = L that is assigned to i after the TL-threshold since then j would not be assigned by this truncated flow .\nThus , the low-load of i is at most TL .\nUsing these properties , we will prove the following key inequality : for any p1 = ( p \u2212 i , p1i ) and p2 = ( p \u2212 i , p2i ) ,\nwhere n2 ,1 H and n2 ,1 L are as defined in ( 4 ) and ( 5 ) , respectively .\nNotice that this immediately implies cycle monotonicity , since if we take p1 = pk and p2 = pk +1 , then ( 7 ) implies that npk , T L > npk +1 , T L \u2212 nk +1 , k\nPROOF .\nLet T1 = T \u2217 ( p1 ) and T2 = T \u2217 ( p2 ) .\nTake the prefix-maximal flow x2 for ( p2 , T2 ) , truncate it at TL , and remove all the jobs from this assignment that are counted in n2 ,1 H , that is , all jobs j such that x2ij = 1 , p2ij = L , p1ij = H. Denote this flow by x. 
Observe that x is a valid flow for ( p1 , TL ) , and the size of this flow is exactly np2 , T 2 | T L \u2212 n2 ,1\nare assigned by x since each such job j is high on i in p2 .\nSince T1 > TL , we must have np1 , T L < npL , T L .\nSo if we augment x to a max-flow for ( p1 , TL ) , then by Lemma 5.8 ( with p = pL and p0 = p1 ) , all the jobs corresponding to n2 ,1 L must be assigned in this max-flow .\nThus , the size of this max-flow is at least ( size of x ) + n2 ,1\nLemma 5.11 Suppose T \u2217 ( p1 ) = TL .\nThen ( 7 ) holds .\nPROOF .\nAgain let T1 = T \u2217 ( p1 ) = TL and T2 = T \u2217 ( p2 ) .\nLet x1 , x2 be the complete assignment , i.e. , the assignment after both steps 2 and 3 , computed by our algorithm for p1 , p2 respectively .\nLet S = { j : x2ij = 1 and p2ij = L } and S00 = { j : x2ij = 1 and p1ij = L } .\nTherefore , | S00 | = | S | \u2212 n2 ,1\nL and | S | = nip2 , T 2 = nip2 , T 2 | T L ( by Corollary 5.9 ) .\nLet T00 = | S00 | \u00b7 L .\nWe consider two cases .\nSuppose first that T00 < TL .\nConsider the following flow for ( p1 , TL ) : assign to every machine other than i the lowassignment of x2 truncated at TL , and assign the jobs in S00 to machine i .\nThis is a valid flow for ( p1 , TL ) since the load on i is T00 < TL .\nIts size is equal to Ei ,6 = i ni , p2 , T 2 | T L + | S00 | = np2 , T 2 | T L \u2212 n2 ,1\nLet N be the number of jobs assigned to machine i in x2 .\nThe load on machine i is | S | \u00b7 L + ( N \u2212 | S | ) \u00b7 H \u2265 | S00 | \u00b7 L \u2212 n2 ,1 L \u00b7 L + ( N \u2212 | S | ) \u00b7 H which is at least | S00 | \u00b7 L > T\u02c6 since n2 ,1 L \u2264 N \u2212 | S | .\nThus we get the inequality | S00 | \u00b7 L + ( N \u2212 | S00 | ) \u00b7 H > T\u02c6 .\nNow consider the point in the execution of the algorithm on instance p2 just before the last high job is assigned to i in Step 3 ( there must be such a job since n2 ,1 L > 0 ) .\nThe load on i at this point is | S | \u00b7 L + ( N \u2212 | S | \u2212 1 ) \u00b7 H which is least | S00 | \u00b7 L \u2212 L = T\u02c6 by a similar argument as above .\nBy the greedy property , every i0 = 6 i also has at least this load at this point , so Pj p2i0jx2i0j \u2265 T\u02c6 .\nAdding these inequalities for all i0 = 6 i , and the earlier inequality for i , we get that\n5.3 Computation of prices\nLemmas 5.7 and 5.12 show that our algorithm is a 2approximation algorithm that satisfies cycle monotonicity .\nThus , by the discussion in Section 3 , there exist prices that yield a truthful mechanism .\nTo obtain a polynomial-time mechanism , we also need to show how to compute these prices ( or payments ) in polynomial-time .\nIt is not clear , if the procedure outlined in Section 3 based on computing shortest paths in the allocation graph yields a polynomial time algorithm , since the allocation graph has an exponential number of nodes ( one for each output assignment ) .\nInstead of analyzing the allocation graph , we will leverage our proof of cycle monotonicity , in particular , inequality ( 7 ) , and simply spell out the payments .\nRecall that the utility of a player is ui = Pi \u2212 li , where Pi is the payment made to player i. For convenience , we will first specify negative payments ( i.e. 
, the Pis will actually be prices charged to the players ) and then show that these can be modified so that players have non-negative utilities ( if they act truthfully ) .\nLet Hi denote the number of jobs assigned to machine i in step 3 .\nBy Corollary 5.6 , we know that all these jobs are assigned to high machines ( according to the declared pis ) .\nLet H_i = Pi06 = i Hi0 and n_i\nWe can interpret our payments as equating the player 's cost to a careful modification of the total load ( in the spirit of VCG prices ) .\nThe first and second terms in ( 10 ) , when subtracted from i 's load li equate i 's cost to the total load .\nThe term np , T \u2217 ( p ) \u2212 np , TiL ( p \u2212 i ) is in fact equal to n_i\np , T \u2217 ( p ) | TiL ( p \u2212 i ) since the low-load on i is at most TiL ( p_i ) ( by Claim 5.9 ) .\nThus the last term in equation ( 10 ) implies that we treat the low jobs that were assigned beyond the TiL ( p_i ) threshold ( to machines other than i ) effectively as high jobs for the total utility calculation from i 's point of view .\nIt is not clear how one could have conjured up these payments a priori in order to prove the truthfulness of our algorithm .\nHowever , by relying on cycle monotonicity , we were not only able to argue the existence of payments , but also our proof paved the way for actually inferring these payments .\nThe following lemma explicitly verifies that the payments defined above do indeed give a truthful mechanism .\nLemma 5.13 Fix a player i and the other players ' declarations p_i .\nLet i 's true type be p1i .\nThen , under the payments defined in ( 10 ) , i 's utility when she declares her true type p1i is at least her utility when she declares any other type p2i .\nPROOF .\nLet c1i , c2i denote i 's total cost , defined as the negative of her utility , when she declares p1 , and p2 , respectively ( and the others declare p_i ) .\nSince p_i is fixed , we omit p_i from the expressions below for notational clarity .\nThe true load of i when she declares her true type p1i is\nPrice specifications are commonly required to satisfy , in addition to truthfulness , individual rationality , i.e. , a player 's utility should be non-negative if she reveals her true value .\nThe payments given by ( 10 ) are not individually rational as they actually charge a player a certain amount .\nHowever , it is well-known that this problem can be easily solved by adding a large-enough constant to the price definition .\nIn our case , for example , letting H ~ denote the vector of all H 's , we can add the term n \u00b7 H \u2212 ( H \u2212 L ) n ( ~ H , p \u2212 i ) , TiL ( p \u2212 i ) to ( 10 ) .\nNote that this is a constant for player i. 
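For comparison with the explicit payments derived in this section, the generic Section 3 recipe (Theorem 3.2: set each price to the shortest-path length from a fixed alternative in the allocation graph) can be written down directly whenever the alternative set is small enough to enumerate. The sketch below is illustrative only: it assumes the $\delta_{a,b}$ values are given, and it makes no claim of polynomial running time for the scheduling domain.

def prices_from_allocation_graph(alternatives, delta):
    """Given delta[(a, b)] = inf{ v_i(a) - v_i(b) : f(v_i, v_-i) = a }, compute node
    potentials P[a] as shortest-path lengths from a fixed node a*, via Bellman-Ford
    on the allocation graph (edge b -> a has weight delta[(a, b)]).
    Returns None if there is a negative cycle, i.e. if cycle monotonicity fails."""
    a_star = alternatives[0]
    INF = float("inf")
    dist = {a: (0.0 if a == a_star else INF) for a in alternatives}
    # edge b -> a with weight delta[(a, b)], for every ordered pair
    edges = [(b, a, delta[(a, b)]) for a in alternatives for b in alternatives if a != b]
    for _ in range(len(alternatives) - 1):
        for b, a, w in edges:
            if dist[b] + w < dist[a]:
                dist[a] = dist[b] + w
    # one extra pass: any further improvement certifies a negative-length cycle
    for b, a, w in edges:
        if dist[b] + w < dist[a]:
            return None
    return dist      # P[a] = dist[a] satisfies P[a] - P[b] <= delta[(a, b)], i.e. (2)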
Thus, the new payments $P'_i(p)$ are the payments defined in (10) plus this constant term; by (11), this indeed results in a non-negative utility for $i$ (since $n_{(\tilde H, p_{-i}), T^L_i(p_{-i})} \le n_{(p_i, p_{-i}), T^L_i(p_{-i})}$ for any type $p_i$ of player $i$). This modification also ensures the additionally desired normalization property that a player who receives no jobs receives zero payment: if player $i$ receives the empty set for some type $p_i$, then she also receives the empty set for the type $\tilde H$ (this is easy to verify for our specific algorithm), and for the type $\tilde H$ her utility equals zero; thus, by truthfulness, this must also be the utility of every other declaration that results in $i$ receiving the empty set. This completes the proof of Theorem 5.3.

5.4 Impossibility of exact implementation

We now show that, irrespective of computational considerations, there does not exist a cycle-monotone algorithm for the $L$-$H$ case with an approximation ratio better than 1.14. Let $H = \alpha \cdot L$ for some $2 < \alpha < 2.5$ that we will choose later. There are two machines I, II and seven jobs. Consider the following two scenarios.

Scenario 1. Every job has the same processing time on both machines: jobs 1-5 are $L$, and jobs 6, 7 are $H$. Any optimal schedule assigns jobs 1-5 to one machine and jobs 6, 7 to the other, and has makespan $OPT_1 = 5L$. The second-best schedule has makespan at least $Second_1 = 2H + L$.

Scenario 2. If the algorithm chooses an optimal schedule for scenario 1, assume without loss of generality that jobs 6, 7 are assigned to machine II. In scenario 2, machine I has the same processing-time vector, while machine II lowers jobs 6, 7 to $L$ and raises jobs 1-5 to $H$. An optimal schedule has makespan $2L + H$, where machine II gets jobs 6, 7 and one of the jobs 1-5. The second-best schedule for this scenario has makespan at least $Second_2 = 5L$.

Theorem 5.14. No deterministic truthful mechanism for the two-value scheduling problem can obtain an approximation ratio better than 1.14.

PROOF. We first argue that a cycle-monotone algorithm cannot choose the optimal schedule in both scenarios. Taking $p^1_{II}, p^2_{II}$ to be machine II's processing-time vectors in scenarios 1 and 2 respectively, we get $\sum_j (p^1_{II,j} - p^2_{II,j})(x^2_{II,j} - x^1_{II,j}) = (L - H)(1 - 0) < 0$, whereas cycle monotonicity applied to the two-cycle $p^1_{II}, p^2_{II}$ requires this sum to be non-negative; so choosing both optima would violate cycle monotonicity for machine II. Thus, any truthful mechanism must return a sub-optimal makespan in at least one scenario, and therefore its approximation ratio is at least $\min\{Second_1/OPT_1,\ Second_2/OPT_2\} = \min\{(2H+L)/5L,\ 5L/(2L+H)\} > 1.14$ for $\alpha = 2.364$.

We remark that for the $\{L_j, H_j\}$ case in which all jobs share a common ratio $r = H_j/L_j$ (this generalizes the restricted-machines setting), one can obtain a fractional truthful mechanism (with efficiently computable prices) that returns a schedule of makespan at most $OPT(p)$ for every $p$. One can view each job $j$ as consisting of $L_j$ sub-jobs, each of size 1 on a machine $i$ if $p_{ij} = L_j$ and of size $r$ if $p_{ij} = H_j$. For this new instance $\tilde p$, note that $\tilde p_{ij} \in \{1, r\}$ for every $i, j$.
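As a small illustration of this reduction (assuming the $L_j$ are integers; the names are our own, not the paper's), the instance $\tilde p$ can be built as follows.

def split_into_subjobs(p, L, r):
    """Job j (low value L[j], high value r * L[j]) becomes L[j] sub-jobs, each of
    size 1 on a machine where j is low and size r where it is high."""
    m, n = len(p), len(p[0])
    subjob_of = []                      # original job index for each sub-job
    p_tilde = [[] for _ in range(m)]    # p_tilde[i][k]: size of sub-job k on machine i
    for j in range(n):
        for _ in range(L[j]):
            subjob_of.append(j)
            for i in range(m):
                p_tilde[i].append(1 if p[i][j] == L[j] else r)
    return p_tilde, subjob_of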
Notice also that any assignment x\u02dc for the instance p\u02dc translates to a fractional assignment x for p , where pijxij = P jl : sub-job of j\u02dcpij \u02dcxij .\nThus , if we use Algorithm 2 to obtain a schedule for the instance \u02dcp , equation ( 6 ) translates precisely to ( 3 ) for the assignment x ; moreover , the prices for p\u02dc translate to prices for the instance p .\nThe number of sub-jobs assigned to low-machines in the flow-phase is simply the total work assigned to low-machines .\nThus , we can implement the above reduction by setting up a max-flow problem that seems to maximize the total work assigned to low machines .\nMoreover , since we have a fractional domain , we can use a more efficient greedy rule for packing the unassigned portions of jobs and argue that the fractional assignment has makespan at most OPT ( p ) .\nThe assignment x need not however satisfy the condition that xij > 0 implies pij < OPT ( p ) for arbitrary r , therefore , the rounding procedure of Lemma 4.2 does not yield a 2-approximation truthful-in-expectation mechanism .\nBut if r > OPT ( p ) ( as in the restricted-machines setting ) , this condition does hold , so we get a 2-approximation truthful mechanism ."} {"id": "C-32", "title": "", "abstract": "", "keyphrases": ["object storag system", "collabor strong-consist applic", "wide-area network", "cooper web cach", "fine-grain share", "transact", "fault-toler properti", "buddycach", "domin perform cost", "optimist system", "peer fetch", "multi-user oo7 benchmark", "cooper cach", "fine-grain share", "fault-toler"], "prmu": [], "lvl-1": "BuddyCache: High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN \u2217 Magnus E. Bjornsson and Liuba Shrira Department of Computer Science Brandeis University Waltham, MA 02454-9110 {magnus, liuba}@cs.\nbrandeis.edu ABSTRACT Collaborative applications provide a shared work environment for groups of networked clients collaborating on a common task.\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects.\nThese properties are difficult to provide in wide-area networks because of high network latency.\nBuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.\nThe challenge is to improve performance while providing the correctness and availability properties of a transactional caching protocol in the presence of node failures and slow peers.\nWe have implemented a BuddyCache prototype and evaluated its performance.\nAnalytical results, confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies, e.g. 
ranging from 40 to 80 milliseconds round trip time to the storage server, peers using BuddyCache can reduce by up to 50% the latency of access to shared objects compared to accessing the remote servers directly.\nCategories and Subject Descriptors C.2.4 [Computer Systems Organization]: Distributed Systems General Terms Design, Performance 1.\nINTRODUCTION Improvements in network connectivity erode the distinction between local and wide-area computing and, increasingly, users expect their work environment to follow them wherever they go.\nNevertheless, distributed applications may perform poorly in wide-area network environments.\nNetwork bandwidth problems will improve in the foreseeable future, but improvement in network latency is fundamentally limited.\nBuddyCache is a new object caching technique that addresses the network latency problem for collaborative applications in wide-area network environment.\nCollaborative applications provide a shared work environment for groups of networked users collaborating on a common task, for example a team of engineers jointly overseeing a construction project.\nStrong-consistency collaborative applications, for example CAD systems, use client/server transactional object storage systems to ensure consistent access to shared persistent data.\nUp to now however, users have rarely considered running consistent network storage systems over wide-area networks as performance would be unacceptable [24].\nFor transactional storage systems, the high cost of wide-area network interactions to maintain data consistency is the main cost limiting the performance and therefore, in wide-area network environments, collaborative applications have been adapted to use weaker consistency storage systems [22].\nAdapting an application to use weak consistency storage system requires significant effort since the application needs to be rewritten to deal with a different storage system semantics.\nIf shared persistent objects could be accessed with low-latency, a new field of distributed strong-consistency applications could be opened.\nCooperative web caching [10, 11, 15] is a well-known approach to reducing client interaction with a server by allowing one client to obtain missing objects from a another client instead of the server.\nCollaborative applications seem a particularly good match to benefit from this approach since one of the hard problems, namely determining what objects are cached where, becomes easy in small groups typical of collaborative settings.\nHowever, cooperative web caching techniques do not provide two important properties needed by collaborative applications, strong consistency and efficient 26 access to fine-grained objects.\nCooperative object caching systems [2] provide these properties.\nHowever, they rely on interaction with the server to provide fine-grain cache coherence that avoids the problem of false sharing when accesses to unrelated objects appear to conflict because they occur on the same physical page.\nInteraction with the server increases latency.\nThe contribution of this work is extending cooperative caching techniques to provide strong consistency and efficient access to fine-grain objects in wide-area environments.\nConsider a team of engineers employed by a construction company overseeing a remote project and working in a shed at the construction site.\nThe engineers use a collaborative CAD application to revise and update complex project design documents.\nThe shared documents are stored in transactional repository servers at 
the company home site.\nThe engineers use workstations running repository clients.\nThe workstations are interconnected by a fast local Ethernet but the network connection to the home repository servers is slow.\nTo improve access latency, clients fetch objects from repository servers and cache and access them locally.\nA coherence protocol ensures that client caches remain consistent when objects are modified.\nThe performance problem facing the collaborative application is coordinating with the servers consistent access to shared objects.\nWith BuddyCache, a group of close-by collaborating clients, connected to storage repository via a high-latency link, can avoid interactions with the server if needed objects, updates or coherency information are available in some client in the group.\nBuddyCache presents two main technical challenges.\nOne challenge is how to provide efficient access to shared finegrained objects in the collaborative group without imposing performance overhead on the entire caching system.\nThe other challenge is to support fine-grain cache coherence in the presence of slow and failed nodes.\nBuddyCache uses a redirection approach similar to one used in cooperative web caching systems [11].\nA redirector server, interposed between the clients and the remote servers, runs on the same network as the collaborating group and, when possible, replaces the function of the remote servers.\nIf the client request can not be served locally, the redirector forwards it to a remote server.\nWhen one of the clients in the group fetches a shared object from the repository, the object is likely to be needed by other clients.\nBuddyCache redirects subsequent requests for this object to the caching client.\nSimilarly, when a client creates or modifies a shared object, the new data is likely to be of potential interest to all group members.\nBuddyCache uses redirection to support peer update, a lightweight application-level multicast technique that provides group members with consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group.\nNevertheless, in a transactional system, redirection interferes with shared object availability.\nSolo commit, is a validation technique used by BuddyCache to avoid the undesirable client dependencies that reduce object availability when some client nodes in the group are slow, or clients fail independently.\nA salient feature of solo commit is supporting fine-grained validation using inexpensive coarse-grained coherence information.\nSince redirection supports the performance benefits of reducing interaction with the server but introduces extra processing cost due to availability mechanisms and request forwarding, this raises the question is the cure worse than the disease?\nWe designed and implemented a BuddyCache prototype and studied its performance benefits and costs using analytical modeling and system measurements.\nWe compared the storage system performance with and without BuddyCache and considered how the cost-benefit balance is affected by network latency.\nAnalytical results, supported by measurements based on the multi-user 007 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits, e.g. 
for latencies ranging from 40 to 80 milliseconds round trip time, clients using the BuddyCache can reduce by up to 50% the latency of access to shared objects compared to the clients accessing the repository directly.\nThese strong performance gains could make transactional object storage systems more attractive for collaborative applications in wide-area environments.\n2.\nRELATED WORK Cooperative caching techniques [20, 16, 13, 2, 28] provide access to client caches to avoid high disk access latency in an environment where servers and clients run on a fast local area network.\nThese techniques use the server to provide redirection and do not consider issues of high network latency.\nMultiprocessor systems and distributed shared memory systems [14, 4, 17, 18, 5] use fine-grain coherence techniques to avoid the performance penalty of false sharing but do not address issues of availability when nodes fail.\nCooperative Web caching techniques, (e.g. [11, 15]) investigate issues of maintaining a directory of objects cached in nearby proxy caches in wide-area environment, using distributed directory protocols for tracking cache changes.\nThis work does not consider issues of consistent concurrent updates to shared fine-grained objects.\nCheriton and Li propose MMO [12] a hybrid web coherence protocol that combines invalidations with updates using multicast delivery channels and receiver-reliable protocol, exploiting locality in a way similar to BuddyCache.\nThis multicast transport level solution is geared to the single writer semantics of web objects.\nIn contrast, BuddyCache uses application level multicast and a sender-reliable coherence protocol to provide similar access latency improvements for transactional objects.\nApplication level multicast solution in a middle-ware system was described by Pendarakis, Shi and Verma in [27].\nThe schema supports small multi-sender groups appropriate for collaborative applications and considers coherence issues in the presence of failures but does not support strong consistency or fine-grained sharing.\nYin, Alvisi, Dahlin and Lin [32, 31] present a hierarchical WAN cache coherence scheme.\nThe protocol uses leases to provide fault-tolerant call-backs and takes advantage of nearby caches to reduce the cost of lease extensions.\nThe study uses simulation to investigate latency and fault tolerance issues in hierarchical avoidance-based coherence scheme.\nIn contrast, our work uses implementation and analysis to evaluate the costs and benefits of redirection and fine grained updates in an optimistic system.\nAnderson, Eastham and Vahdat in WebFS [29] present a global file system coherence protocol that allows clients to choose 27 on per file basis between receiving updates or invalidations.\nUpdates and invalidations are multicast on separate channels and clients subscribe to one of the channels.\nThe protocol exploits application specific methods e.g. 
last-writer-wins policy for broadcast applications, to deal with concurrent updates but is limited to file systems.\nMazieres studies a bandwidth saving technique [24] to detect and avoid repeated file fragment transfers across a WAN when fragments are available in a local cache.\nBuddyCache provides similar bandwidth improvements when objects are available in the group cache.\n3.\nBUDDYCACHE High network latency imposes performance penalty for transactional applications accessing shared persistent objects in wide-area network environment.\nThis section describes the BuddyCache approach for reducing the network latency penalty in collaborative applications and explains the main design decisions.\nWe consider a system in which a distributed transactional object repository stores objects in highly reliable servers, perhaps outsourced in data-centers connected via high-bandwidth reliable networks.\nCollaborating clients interconnected via a fast local network, connect via high-latency, possibly satellite, links to the servers at the data-centers to access shared persistent objects.\nThe servers provide disk storage for the persistent objects.\nA persistent object is owned by a single server.\nObjects may be small (order of 100 bytes for programming language objects [23]).\nTo amortize the cost of disk and network transfer objects are grouped into physical pages.\nTo improve object access latency, clients fetch the objects from the servers and cache and access them locally.\nA transactional cache coherence protocol runs at clients and servers to ensure that client caches remain consistent when objects are modified.\nThe performance problem facing the collaborating client group is the high latency of coordinating consistent access to the shared objects.\nBuddyCache architecture is based on a request redirection server, interposed between the clients and the remote servers.\nThe interposed server (the redirector) runs on the same network as the collaborative group and, when possible, replaces the function of the remote servers.\nIf the client request can be served locally, the interaction with the server is avoided.\nIf the client request can not be served locally, redirector forwards it to a remote server.\nRedirection approach has been used to improve the performance of web caching protocols.\nBuddyCache redirector supports the correctness, availability and fault-tolerance properties of transactional caching protocol [19].\nThe correctness property ensures onecopy serializability of the objects committed by the client transactions.\nThe availability and fault-tolerance properties ensure that a crashed or slow client does not disrupt any other client``s access to persistent objects.\nThe three types of client server interactions in a transactional caching protocol are the commit of a transaction, the fetch of an object missing in a client cache, and the exchange of cache coherence information.\nBuddyCache avoids interactions with the server when a missing object, or cache coherence information needed by a client is available within the collaborating group.\nThe redirector always interacts with the servers at commit time because only storage servers provide transaction durability in a way that ensures committed Client Redirector Client Client Buddy Group Client Redirector Client Client Buddy Group Servers Figure 1: BuddyCache.\ndata remains available in the presence of client or redirector failures.\nFigure 1 shows the overall BuddyCache architecture.\n3.1 Cache Coherence The redirector 
maintains a directory of pages cached at each client to provide cooperative caching [20, 16, 13, 2, 28], redirecting a client fetch request to another client that caches the requested object. In addition, the redirector manages cache coherence. Several efficient transactional cache coherence protocols [19] exist for persistent object storage systems. Protocols make different choices in the granularity of data transfers and the granularity of cache consistency. The current best-performing protocols use page-granularity transfers when clients fetch missing objects from a server, and object-granularity coherence to avoid false (page-level) conflicts. The transactional caching taxonomy [19] proposed by Carey, Franklin and Livny classifies the coherence protocols into two main categories according to whether a protocol avoids or detects access to stale objects in the client cache. The BuddyCache approach could be applied to both categories, with different performance costs and benefits in each. We chose to investigate BuddyCache in the context of OCC [3], the current best-performing detection-based protocol: OCC is simple, performs well in high-latency networks, has been implemented, and we had access to the implementation. We are also investigating BuddyCache with PSAA [33], the best-performing avoidance-based protocol. Below we outline the OCC protocol [3].

The OCC protocol uses object-level coherence. When a client requests a missing object, the server transfers the containing page. A transaction can read and update locally cached objects without server intervention. However, before a transaction commits it must be validated; the server must make sure the validating transaction has not read a stale version of some object that was updated by a successfully committed or validated transaction. If validation fails, the transaction is aborted. To reduce the number and cost of aborts, a server sends background object invalidation messages to the clients caching the containing pages. When clients receive invalidations they remove stale objects from the cache and send background acknowledgments to let the server know about this. Since invalidations remove stale objects from the client cache, an invalidation acknowledgment indicates to the server that a client with no outstanding invalidations has read up-to-date objects. An unacknowledged invalidation indicates that a stale object may have been accessed in the client cache. The validation procedure at the server aborts a client transaction if the client read an object while an invalidation was outstanding. The acknowledged-invalidation mechanism supports object-level cache coherence without object-based directories or per-object version numbers. Avoiding per-object overheads is very important to reduce the performance penalties [3] of managing many small objects, since typical objects are small. An important BuddyCache design goal is to maintain this benefit.

Figure 2: Peer fetch.

Since in BuddyCache a page can be fetched into a client cache without server intervention (as illustrated in Figure 2), cache directories at the servers keep track of pages cached in each collaborating group rather than in each client. The redirector keeps track of pages cached by each client in a group. Servers send to the redirector invalidations for pages cached anywhere in the group, and the redirector propagates the invalidations to the affected clients. When all affected clients acknowledge the invalidations, the redirector propagates the group acknowledgment to the server.
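The following sketch illustrates how a redirector could track per-client caching and collapse per-client invalidation acknowledgments into a single group acknowledgment. It is illustrative only: the class and method names are our assumptions and do not correspond to the actual BuddyCache or Thor interfaces.

class Redirector:
    """Minimal sketch of invalidation forwarding and group acknowledgment."""

    def __init__(self, server):
        self.server = server
        self.cached_by = {}       # page id -> set of client ids caching the page
        self.pending_acks = {}    # page id -> set of client ids still to acknowledge

    def note_fetch(self, client_id, page_id):
        self.cached_by.setdefault(page_id, set()).add(client_id)

    def on_server_invalidation(self, page_id, clients):
        """Forward a server invalidation to every client caching the page."""
        targets = self.cached_by.get(page_id, set())
        if not targets:
            self.server.group_ack(page_id)      # nobody in the group caches it
            return
        self.pending_acks[page_id] = set(targets)
        for c in targets:
            clients[c].invalidate(page_id)

    def on_client_ack(self, client_id, page_id):
        """Once every affected client has discarded its stale copy, send one
        group acknowledgment back to the server."""
        pending = self.pending_acks.get(page_id)
        if pending is None:
            return
        pending.discard(client_id)
        if not pending:
            del self.pending_acks[page_id]
            self.server.group_ack(page_id)

Note that a single slow or failed client stalls the group acknowledgment in this scheme; Section 3.3 (solo commit) addresses exactly this problem.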
invalidations, redirector can propagate the group acknowledgment to the server.\n3.2 Light-weight Peer Update When one of the clients in the collaborative group creates or modifies shared objects, the copies cached by any other client become stale but the new data is likely to be of potential interest to the group members.\nThe goal in BuddyCache is to provide group members with efficient and consistent access to updates committed within the group without imposing extra overhead on other parts of the storage system.\nThe two possible approaches to deal with stale data are cache invalidations and cache updates.\nCache coherence studies in web systems (e.g. [7]) DSM systems (e.g. [5]), and transactional object systems (e.g. [19]) compare the benefits of update and invalidation.\nThe studies show the Committing Client Server Redirector x2.\nStore x 6.\nUpdate x 3.\nCommit x 4.\nCommit OK 5.\nCommit OK1.\nCommit x Figure 3: Peer update.\nbenefits are strongly workload-dependent.\nIn general, invalidation-based coherence protocols are efficient since invalidations are small, batched and piggybacked on other messages.\nMoreover, invalidation protocols match the current hardware trend for increasing client cache sizes.\nLarger caches are likely to contain much more data than is actively used.\nUpdate-based protocols that propagate updates to low-interest objects in a wide-area network would be wasteful.\nNevertheless, invalidation-based coherence protocols can perform poorly in high-latency networks [12] if the object``s new value is likely to be of interest to another group member.\nWith an invalidation-based protocol, one member``s update will invalidate another member``s cached copy, causing the latter to perform a high-latency fetch of the new value from the server.\nBuddyCache circumvents this well-known bandwidth vs. 
3.3 Solo commit

In the OCC protocol, clients acknowledge server invalidations (or updates) to indicate the removal of stale data. A straightforward group acknowledgement protocol, in which the redirector collects the individual acknowledgements and propagates a single collective acknowledgement to the server, interferes with the availability property of the transactional caching protocol [19]: a client that is slow to acknowledge an invalidation, or has failed, can delay the group acknowledgement and prevent another client in the group from committing a transaction.

Figure 4: Validation with Slow Peers.

For example,
an engineer that commits a repeated revision to the same shared design object (and therefore holds the latest version of the object) may need to abort if the group acknowledgement has not propagated to the server.\nConsider a situation depicted in figure 4 where Client1 commits a transaction T that reads the latest version of an object x on page P recently modified by Client1.\nIf the commit request for T reaches the server before the collective acknowledgement from Client2 for the last modification of x arrives at the server, the OCC validation procedure considers x to be stale and aborts T (because, as explained above, an invalidation unacknowledged by a client, acts as indication to the server that the cached object value is stale at the client).\nNote that while invalidations are not required for the correctness of the OCC protocol, they are very important for the performance since they reduce the performance penalties of aborts and false sharing.\nThe asynchronous invalidations are an important part of the reason OCC has competitive performance with PSAA [33], the best performing avoidance-based protocol [3].\nNevertheless, since invalidations are sent and processed asynchronously, invalidation processing may be arbitrarily delayed at a client.\nLease-based schemes (time-out based) have been proposed to improve the availability of hierarchical callback-based coherence protocols [32] but the asynchronous nature of invalidations makes the lease-based approaches inappropriate for asynchronous invalidations.\nThe Solo commit validation protocol allows a client with up-to-date objects to commit a transaction even if the group acknowledgement is delayed due to slow or crashed peers.\nThe protocol requires clients to include extra information with the transaction read sets in the commit message, to indicate to the server the objects read by the transaction are up-to-date.\nObject version numbers could provide a simple way to track up-to-date objects but, as mentioned above, maintaining per object version numbers imposes unacceptably high overheads (in disk storage, I/O costs and directory size) on the entire object system when objects are small [23].\nInstead, solo commit uses coarse-grain page version numbers to identify fine-grain object versions.\nA page version number is incremented at a server when at transaction that modifies objects on the page commits.\nUpdates committed by a single transaction and corresponding invalidations are therefore uniquely identified by the modified page version number.\nPage version numbers are propagated to clients in fetch replies, commit replies and with invalidations, and clients include page version numbers in commit requests sent to the servers.\nIf a transaction fails validation due to missing group acknowledgement, the server checks page version numbers of the objects in the transaction read set and allows the transaction to commit if the client has read from the latest page version.\nThe page version numbers enable independent commits but page version checks only detect page-level conflicts.\nTo detect object-level conflicts and avoid the problem of false sharing we need the acknowledged invalidations.\nSection 4 describes the details of the implementation of solo commit support for fine-grain sharing.\n3.4 Group Configuration The BuddyCache architecture supports multiple concurrent peer groups.\nPotentially, it may be faster to access data cached in another peer group than to access a remote server.\nIn such case extending BuddyCache protocols to 
support multi-level peer caching could be worthwhile.\nWe have not pursued this possibility for several reasons.\nIn web caching workloads, simply increasing the population of clients in a proxy cache often increases the overall cache hit rate [30].\nIn BuddyCache applications, however, we expect sharing to result mainly from explicit client interaction and collaboration, suggesting that inter-group fetching is unlikely to occur.\nMoreover, measurements from multi-level web caching systems [9] indicate that a multilevel system may not be advantageous unless the network connection between the peer groups is very fast.\nWe are primarily interested in environments where closely collaborating peers have fast close-range connectivity, but the connection between peer groups may be slow.\nAs a result, we decided that support for inter-group fetching in BuddyCache is not a high priority right now.\nTo support heterogenous resource-rich and resource-poor peers, the BuddyCache redirector can be configured to run either in one of the peer nodes or, when available, in a separate node within the site infrastructure.\nMoreover, in a resource-rich infrastructure node, the redirector can be configured as a stand-by peer cache to receive pages fetched by other peers, emulating a central cache somewhat similar to a regional web proxy cache.\nFrom the BuddyCache cache coherence protocol point of view, however, such a stand-by peer cache is equivalent to a regular peer cache and therefore we do not consider this case separately in the discussion in this paper.\n4.\nIMPLEMENTATION In this section we provide the details of the BuddyCache implementation.\nWe have implemented BuddyCache in the Thor client/server object-oriented database [23].\nThor supports high performance access to distributed objects and therefore provides a good test platform to investigate BuddyCache performance.\n30 4.1 Base Storage System Thor servers provide persistent storage for objects and clients cache copies of these objects.\nApplications run at the clients and interact with the system by making calls on methods of cached objects.\nAll method calls occur within atomic transactions.\nClients communicate with servers to fetch pages or to commit a transaction.\nThe servers have a disk for storing persistent objects, a stable transaction log, and volatile memory.\nThe disk is organized as a collection of pages which are the units of disk access.\nThe stable log holds commit information and object modifications for committed transactions.\nThe server memory contains cache directory and a recoverable modified object cache called the MOB.\nThe directory keeps track of which pages are cached by which clients.\nThe MOB holds recently modified objects that have not yet been written back to their pages on disk.\nAs MOB fills up, a background process propagates modified objects to the disk [21, 26].\n4.2 Base Cache Coherence Transactions are serialized using optimistic concurrency control OCC [3] described in Section 3.1.\nWe provide some of the relevant OCC protocol implementation details.\nThe client keeps track of objects that are read and modified by its transaction; it sends this information, along with new copies of modified objects, to the servers when it tries to commit the transaction.\nThe servers determine whether the commit is possible, using a two-phase commit protocol if the transaction used objects at multiple servers.\nIf the transaction commits, the new copies of modified objects are appended to the log and also inserted in the 
MOB.\nThe MOB is recoverable, i.e. if the server crashes, the MOB is reconstructed at recovery by scanning the log.\nSince objects are not locked before being used, a transaction commit can cause caches to contain obsolete objects.\nServers will abort a transaction that used obsolete objects.\nHowever, to reduce the probability of aborts, servers notify clients when their objects become obsolete by sending them invalidation messages; a server uses its directory and the information about the committing transaction to determine what invalidation messages to send.\nInvalidation messages are small because they simply identify obsolete objects.\nFurthermore, they are sent in the background, batched and piggybacked on other messages.\nWhen a client receives an invalidation message, it removes obsolete objects from its cache and aborts the current transaction if it used them.\nThe client continues to retain pages containing invalidated objects; these pages are now incomplete with holes in place of the invalidated objects.\nPerforming invalidation on an object basis means that false sharing does not cause unnecessary aborts; keeping incomplete pages in the client cache means that false sharing does not lead to unnecessary cache misses.\nClients acknowledge invalidations to indicate removal of stale data as explained in Section 3.1.\nInvalidation messages prevent some aborts, and accelerate those that must happen - thus wasting less work and o\ufb04oading detection of aborts from servers to clients.\nWhen a transaction aborts, its client restores the cached copies of modified objects to the state they had before the transaction started; this is possible because a client makes a copy of an object the first time it is modified by a transaction.\n4.3 Redirection The redirector runs on the same local network as the peer group, in one of the peer nodes, or in a special node within the infrastructure.\nIt maintains a directory of pages available in the peer group and provides fast centralized fetch redirection (see figure 2) between the peer caches.\nTo improve performance, clients inform the redirector when they evict pages or objects by piggybacking that information on messages sent to the redirector.\nTo ensure up-to-date objects are fetched from the group cache the redirector tracks the status of the pages.\nA cached page is either complete in which case it contains consistent values for all the objects, or incomplete, in which case some of the objects on a page are marked invalid.\nOnly complete pages are used by the peer fetch.\nThe protocol for maintaining page status when pages are updated and invalidated is described in Section 4.4.\nWhen a client request has to be processed at the servers, e.g., a complete requested page is unavailable in the peer group or a peer needs to commit a transaction, the redirector acts as a server proxy: it forwards the request to the server, and then forwards the reply back to the client.\nIn addition, in response to invalidations sent by a server, the redirector distributes the update or invalidation information to clients caching the modified page and, after all clients acknowledge, propagates the group acknowledgment back to the server (see figure 3).\nThe redirector-server protocol is, in effect, the client-server protocol used in the base Thor storage system, where the combined peer group cache is playing the role of a single client cache in the base system.\n4.4 Peer Update The peer update is implemented as follows.\nAn update commit request from a client 
arriving at the redirector contains the object updates. The redirector retains the updates and propagates the request to the coordinator server. After a transaction commits, using two-phase commit if needed, the coordinator server sends a commit reply to the redirector of the committing client group. The redirector forwards the reply to the committing client. It then waits for the corresponding invalidations to arrive before propagating the retained (committed) updates to the clients caching the modified pages (see Figure 3).
Participating servers that are home to objects modified by the transaction generate object invalidations for each cache group that caches pages containing the modified objects (including the committing group). The invalidations are sent lazily to the redirectors to ensure that all the clients in the groups caching the modified objects get rid of the stale data. In cache groups other than the committing group, the redirector propagates the invalidations to all the clients caching the modified pages, collects the client acknowledgments and, after completing the collection, propagates a collective acknowledgment back to the server. Within the committing client group, the arriving invalidations are not propagated. Instead, updates are sent to the clients caching those objects' pages, the updates are acknowledged by the clients, and the collective acknowledgment is propagated to the server.
An invalidation renders a cached page unavailable for peer fetch, changing the status of a complete page P to incomplete. In contrast, an update of a complete page preserves the complete page status. As shown by studies of fragment reconstruction [2], such update propagation avoids the performance penalties of false sharing. That is, when clients within a group modify different objects on the same page, the page retains its complete status and remains available for peer fetch. Therefore, the effect of peer update is similar to eager fragment reconstruction [2]. We have also considered allowing a peer to fetch an incomplete page (with invalid objects marked accordingly) but decided against it because of the extra complexity involved in tracking invalid objects.
4.5 Vcache
The solo commit validation protocol allows clients with up-to-date objects to commit independently of slower (or failed) group members. As explained in Section 3.3, the solo commit protocol allows a transaction T to pass validation if extra coherence information supplied by the client indicates that T has read up-to-date objects. Clients use page version numbers to provide this extra coherence information. That is, a client includes the page version number corresponding to each object in the read object set sent in the commit request to the server. Since a unique page version number corresponds to each committed object update, the page version number associated with an object allows the validation procedure at the server to check whether the client transaction has read up-to-date objects.
The use of coarse-grain page versions to identify object versions avoids the high penalty of maintaining persistent object versions for small objects, but requires an extra protocol at the client to maintain the mapping from a cached object to the identifying page version (ObjectToVersion). The main implementation issue is maintaining this mapping efficiently. At the server side, when modifications commit, servers associate page version numbers with the invalidations.
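On the client side, assembling this per-object version information at commit time amounts to a simple lookup in the ObjectToVersion mapping. The sketch below is illustrative only; the names read_objects, page_version and vcache are invented for the example, and the real code operates on Thor's cache data structures.

    def build_commit_read_set(read_objects, page_version, vcache):
        """Return {oid: page version number} to include in the commit request.

        read_objects:  iterable of (oid, pid) pairs read by the transaction
        page_version:  pid -> version number of the cached page
        vcache:        oid -> highest reordered version number (present only
                       for objects touched by out-of-sequence messages)
        """
        read_set = {}
        for oid, pid in read_objects:
            # Prefer the vcache entry when one exists (see below); otherwise
            # the object's version is that of its containing cached page.
            read_set[oid] = vcache.get(oid, page_version[pid])
        return read_set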
At validation time, if an unacknowledged invalidation is pending for an object x read by a transaction T, the validation procedure checks whether the version number for x in T's read set matches the version number of the highest pending invalidation for x; if it does, the object value is current, otherwise T fails validation.
We note again that the page version number-based checks and the invalidation acknowledgment-based checks are complementary in the solo commit validation, and both are needed. The page version number check allows validation to proceed before invalidation acknowledgments arrive, but by itself it only detects page-level conflicts and is not sufficient to support fine-grain coherence without the object-level invalidations.
We now describe how the client manages the ObjectToVersion mapping. The client maintains a page version number for each cached page. The version number satisfies the following invariant VP about the state of objects on a page: if a cached page P has a version number v, then the value of an object o on P is either invalid or it reflects at least the modifications committed by transactions preceding the transaction that set P's version number to v.
New object values and new page version numbers arrive when a client fetches a page or when a commit reply or invalidations arrive for this page. The new object values modify the page and, therefore, the page version number needs to be updated to maintain the invariant VP. A page version number that arrives when a client fetches a page replaces the page version number for this page; such an update preserves the invariant VP.
[Figure 5: Reordered Invalidations - messages com(P(x,6),Q(y,9)), ok(P(x,8),Q(y,10)), inv(P(r,7)) and inv(Q(s,11)) exchanged between Client 1, the redirector, Server 1 and Server 2.]
Similarly, an in-sequence page version number arriving at the client in a commit or invalidation message advances the version number for the entire cached page without violating VP. However, invalidations or updates and their corresponding page version numbers can also arrive at the client out of sequence, in which case updating the page version number could violate VP.
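The corresponding server-side check is equally small. The following sketch is a minimal illustration of the rule stated above, not the Thor server code; the structure names are invented, and the usual OCC read/write conflict detection that runs alongside it is omitted.

    def validate_solo_commit(read_set, pending_invalidations):
        """read_set: {oid: page version the client claims to have read}
        pending_invalidations: {oid: page version of the highest invalidation
        not yet acknowledged by this client group}
        Returns True if the read set passes the solo commit version check."""
        for oid, read_version in read_set.items():
            pending = pending_invalidations.get(oid)
            if pending is None:
                continue  # no unacknowledged invalidation for oid
            if read_version != pending:
                # The client has not seen the latest committed modification
                # of oid, so the cached value may be stale: fail validation.
                return False
        return True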
Out-of-sequence delivery can occur, for example, when a commit reply for a transaction that updates object x on page P in server S1, and object y on page Q in server S2, delivers a new version number for P from the transaction coordinator S1 before an invalidation generated for an earlier transaction that modified object r on page P arrives from S1 (as shown in Figure 5).
The cache update protocol ensures that the value of any object o in a cached page P reflects the update or invalidation with the highest observed version number. That is, obsolete updates or invalidations received out of sequence do not affect the value of an object. To maintain the ObjectToVersion mapping and the invariant VP in the presence of out-of-sequence arrival of page version numbers, the client manages a small version number cache, the vcache, that maintains the mapping from an object to its corresponding page version number for all reordered version number updates until a complete page version number sequence is assembled. When the missing version numbers for the page arrive and complete a sequence, the version number for the entire page is advanced.
The ObjectToVersion mapping, including the vcache and the page version numbers, is used at transaction commit time to provide version numbers for the read object set as follows. If the read object has an entry in the vcache, its version number is equal to the highest version number in the vcache for this object. If the object is not present in the vcache, its version number is equal to the version number of its containing cached page. Figure 6 shows the ObjectToVersion mapping in the client cache, including the page version numbers for pages and the vcache. A client can limit the vcache size as needed, since re-fetching a page removes all reordered page version numbers for that page from the vcache. However, we expect version number reordering to be uncommon and therefore expect the vcache to be very small.
5. BUDDYCACHE FAILOVER
A client group contains multiple client nodes and a redirector that can fail independently.
[Figure 6: ObjectToVersion map with vcache - the client cache holds a page version for each cached page and a vcache mapping individual objects to reordered versions.]
The goal of the failover protocol is to reconfigure the BuddyCache in the case of a node failure, so that the failure of one node does not prevent other clients from accessing shared objects. Moreover, the failure of the redirector should allow unaffected clients to keep their caches intact. We have designed a failover protocol for BuddyCache but have not implemented it yet. The appendix outlines the protocol.
6. PERFORMANCE EVALUATION
BuddyCache redirection supports the performance benefits of avoiding communication with the servers but introduces extra processing cost due to availability mechanisms and request forwarding. Is the cure worse than the disease? To answer this question, we have implemented a BuddyCache prototype for the OCC protocol and conducted experiments to analyze the performance benefits and costs over a range of network latencies.
6.1 Analysis
The performance benefits of peer fetch and peer update are due to avoided server interactions. This section presents a simple analytical performance model for this benefit. The avoided server interactions correspond to different types of client cache misses: cold misses, invalidation misses and capacity misses. Our analysis focuses on cold misses and invalidation misses, since the benefit of avoiding capacity misses can be derived from the cold misses. Moreover, technology trends indicate that memory and
storage capacity will continue to grow and therefore a typical BuddyCache configuration is likely not to be cache limited.\nThe client cache misses are determined by several variables, including the workload and the cache configuration.\nOur analysis tries, as much as possible, to separate these variables so they can be controlled in the validation experiments.\nTo study the benefit of avoiding cold misses, we consider cold cache performance in a read-only workload (no invalidation misses).\nWe expect peer fetch to improve the latency cost for client cold cache misses by fetching objects from nearby cache.\nWe evaluate how the redirection cost affects this benefit by comparing and analyzing the performance of an application running in a storage system with BuddyCache and without (called Base).\nTo study the benefit of avoiding invalidation misses, we consider hot cache performance in a workload with modifications (with no cold misses).\nIn hot caches we expect BuddyCache to provide two complementary benefits, both of which reduce the latency of access to shared modified objects.\nPeer update lets a client access an object modified by a nearby collaborating peer without the delay imposed by invalidation-only protocols.\nIn groups where peers share a read-only interest in the modified objects, peer fetch allows a client to access a modified object as soon as a collaborating peer has it, which avoids the delay of server fetch without the high cost imposed by the update-only protocols.\nTechnology trends indicate that both benefits will remain important in the foreseeable future.\nThe trend toward increase in available network bandwidth decreases the cost of the update-only protocols.\nHowever, the trend toward increasingly large caches, that are updated when cached objects are modified, makes invalidation-base protocols more attractive.\nTo evaluate these two benefits we consider the performance of an application running without BuddyCache with an application running BuddyCache in two configurations.\nOne, where a peer in the group modifies the objects, and another where the objects are modified by a peer outside the group.\nPeer update can also avoid invalidation misses due to false-sharing, introduced when multiple peers update different objects on the same page concurrently.\nWe do not analyze this benefit (demonstrated by earlier work [2]) because our benchmarks do not allow us to control object layout, and also because this benefit can be derived given the cache hit rate and workload contention.\n6.1.1 The Model The model considers how the time to complete an execution with and without BuddyCache is affected by invalidation misses and cold misses.\nConsider k clients running concurrently accessing uniformly a shared set of N pages in BuddyCache (BC) and Base.\nLet tfetch(S), tredirect(S), tcommit(S), and tcompute(S) be the time it takes a client to, respectively, fetch from server, peer fetch, commit a transaction and compute in a transaction, in a system S, where S is either a system with BuddyCache (BC) or without (Base).\nFor simplicity, our model assumes the fetch and commit times are constant.\nIn general they may vary with the server load, e.g. 
they depend on the total number of clients in the system.
The number of misses avoided by peer fetch depends on k, the number of clients in the BuddyCache, and on the client co-interest in the shared data. In a specific BuddyCache execution it is modeled by the variable r, defined as the number of fetches arriving at the redirector for a given version of page P (i.e., until an object on the page is invalidated).
Consider an execution with cold misses. A client starts with a cold cache and runs a read-only workload until it accesses all N pages while committing l transactions. We assume there are no capacity misses, i.e., the client cache is large enough to hold N pages. In BC, r cold misses for page P reach the redirector. The first of the misses fetches P from the server, and the subsequent r - 1 misses are redirected. Since each client accesses the entire shared set, r = k. Let Tcold(Base) and Tcold(BC) be the time it takes to complete the l transactions in Base and BC:

  Tcold(Base) = N * tfetch(Base) + (tcompute + tcommit(Base)) * l                                    (1)

  Tcold(BC) = N * [ (1/k) * tfetch(BC) + (1 - 1/k) * tredirect ] + (tcompute + tcommit(BC)) * l      (2)

Consider next an execution with invalidation misses. A client starts with a hot cache containing the working set of N pages. We focus on a simple case where one client (the writer) runs a workload with modifications, and the other clients (the readers) run a read-only workload. In a group containing the writer (BCW), peer update eliminates all invalidation misses. In a group containing only readers (BCR), during a steady-state execution with uniform updates, a client transaction has missinv invalidation misses. Consider the sequence of r client misses on page P that arrive at the redirector in BCR between two consecutive invalidations of page P. The first miss goes to the server, and the r - 1 subsequent misses are redirected. Unlike with cold misses, r <= k because the second invalidation disables redirection for P until the next miss on P causes a server fetch. Assuming uniform access, a client invalidation miss has a chance of 1/r of being the first miss (resulting in a server fetch), and a chance of (1 - 1/r) of being redirected. Let Tinval(Base), Tinval(BCR) and Tinval(BCW) be the time it takes to complete a single transaction in the Base, BCR and BCW systems:

  Tinval(Base) = missinv * tfetch(Base) + tcompute + tcommit(Base)                                   (3)

  Tinval(BCR) = missinv * [ (1/r) * tfetch(BCR) + (1 - 1/r) * tredirect(BCR) ] + tcompute + tcommit(BCR)   (4)

  Tinval(BCW) = tcompute + tcommit(BCW)                                                              (5)

In the experiments described below, we measure the parameters N, r, missinv, tfetch(S), tredirect(S), tcommit(S), and tcompute(S). We compute the completion times derived using the above model and derive the benefits. We then validate the model by comparing the derived values to the completion times and benefits measured directly in the experiments.
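The model is small enough to state directly in code. The following Python sketch is purely illustrative (none of it is taken from the BuddyCache sources); the parameter values are the measured quantities listed above, and the last function is the relative-improvement metric used in the evaluation.

    def t_cold_base(N, l, t_fetch, t_compute, t_commit):
        return N * t_fetch + (t_compute + t_commit) * l            # equation (1)

    def t_cold_bc(N, l, k, t_fetch, t_redirect, t_compute, t_commit):
        per_page = (1.0 / k) * t_fetch + (1 - 1.0 / k) * t_redirect
        return N * per_page + (t_compute + t_commit) * l           # equation (2)

    def t_inval_base(miss_inv, t_fetch, t_compute, t_commit):
        return miss_inv * t_fetch + t_compute + t_commit           # equation (3)

    def t_inval_bcr(miss_inv, r, t_fetch, t_redirect, t_compute, t_commit):
        per_miss = (1.0 / r) * t_fetch + (1 - 1.0 / r) * t_redirect
        return miss_inv * per_miss + t_compute + t_commit          # equation (4)

    def t_inval_bcw(t_compute, t_commit):
        return t_compute + t_commit                                # equation (5)

    def relative_benefit(t_base, t_buddy):
        # Improvement of a Buddy configuration over Base, relative to Base.
        return (t_base - t_buddy) / t_base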
6.2 Experimental Setup
Before presenting our results we describe our experimental setup. We use two systems in our experiments. The Base system runs the Thor distributed object storage system [23] with clients connecting directly to the servers. The Buddy system runs our implementation of the BuddyCache prototype in Thor, supporting peer fetch, peer update, and solo commit, but not failover.
Our workloads are based on the multi-user OO7 benchmark [8]; this benchmark is intended to capture the characteristics of many different multi-user CAD/CAM/CASE applications, but does not model any specific application. We use OO7 because it is a standard benchmark for measuring object storage system performance. The OO7 database contains a tree of assembly objects with leaves pointing to three composite parts chosen randomly from among 500 such objects. Each composite part contains a graph of atomic parts linked by connection objects; each atomic part has 3 outgoing connections. We use a medium database that has 200 atomic parts per composite part. The multi-user database allocates for each client a private module consisting of one tree of assembly objects, and adds an extra shared module that scales proportionally to the number of clients.
We expect a typical BuddyCache configuration not to be cache limited and therefore focus on workloads where the objects in the client working set fit in the cache. Since the goal of our study is to evaluate how effectively our techniques deal with access to shared objects, we limit client access to shared data only. This allows us to study the effect our techniques have on cold cache and cache consistency misses while isolating, as much as possible, the effect of cache capacity misses. To keep the length of our experiments reasonable, we use small caches. The OO7 benchmark generates database modules of predefined size. In our implementation of OO7, the private module size is about 38MB. To make sure that the entire working set fits into the cache, we use a single private module and choose a cache size of 40MB for each client. The OO7 database is generated with modules for 3 clients, only one of which is used in our experiments, as explained above. The objects in the database are clustered in 8K pages, which are also the unit of transfer in the fetch requests.
We consider two types of transaction workloads in our analysis, read-only and read-write. In the OO7 benchmark, read-only transactions use the T1 traversal that performs a depth-first traversal of the entire composite part graph. Write transactions use the T2b traversal that is identical to T1 except that it modifies all the atomic parts in a single composite. A single transaction includes one traversal and there is no sleep time between transactions. Both read-only and read-write transactions always work with data from the same module. Clients running read-write transactions do not modify objects in every transaction; instead, they have a 50% probability of running a read-only transaction.
The database was stored by a server on a 40GB IBM 7200RPM hard drive with an 8.5 ms average seek time and a 40 MB/sec data transfer rate. In the Base system clients connect directly to the database. In the Buddy system clients connect to the redirector, which connects to the database. We run the experiments with 1-10 clients in Base, and one or two 1-10 client groups in Buddy. The server, the clients and the redirectors ran on 850MHz Intel Pentium III-based PCs with 512MB of memory, running Red Hat Linux 6.2. They were connected by a 100Mb/s Ethernet. The server was configured with a 50MB cache (of which 6MB were used for the modified object buffer); each client had a 40MB cache. The experiments ran in the Utah experimental testbed emulab.net [1].

  Table 1: Commit and server fetch latency [ms]
                      Base                Buddy
                3 group  5 group    3 group  5 group
    Fetch         1.3      1.4        2.4      2.6
    Commit        2.5      5.5        2.4      5.7

  Table 2: Peer fetch latency breakdown
    Operation            Latency [ms]
    PeerFetch            1.8 - 5.5
      AlertHelper        0.3 - 4.6
      CopyUnswizzle      0.24
      CrossRedirector    0.16

6.3 Basic Costs
This section analyzes the basic cost of the
requests in the Buddy system during the OO7 runs.\n6.3.1 Redirection Fetch and commit requests in the BuddyCache cross the redirector, a cost not incurred in the Base system.\nFor a request redirected to the server (server fetch) the extra cost of redirection includes a local request from the client to redirector on the way to and from the server.\nWe evaluate this latency overhead indirectly by comparing the measured latency of the Buddy system server fetch or commit request with the measured latency of the corresponding request in the Base system.\nTable 1 shows the latency for the commit and server fetch requests in the Base and Buddy system for 3 client and 5 client groups in a fast local area network.\nAll the numbers were computed by averaging measured request latency over 1000 requests.\nThe measurements show that the redirection cost of crossing the redirector in not very high even in a local area network.\nThe commit cost increases with the number of clients since commits are processed sequentially.\nThe fetch cost does not increase as much because the server cache reduces this cost.\nIn a large system with many groups, however, the server cache becomes less efficient.\nTo evaluate the overheads of the peer fetch, we measure the peer fetch latency (PeerFetch) at the requesting client and break down its component costs.\nIn peer fetch, the cost of the redirection includes, in addition to the local network request cost, the CPU processing latency of crossing the redirector and crossing the helper, the latter including the time to process the help request and the time to copy, and unswizzle the requested page.\nWe directly measured the time to copy and unswizzle the requested page at the helper, (CopyUnswizzle), and timed the crossing times using a null crossing request.\nTable 2 summarizes the latencies that allows us to break down the peer fetch costs.\nCrossRedirector, includes the CPU latency of crossing the redirector plus a local network round-trip and is measured by timing a round-trip null request issued by a client to the redirector.\nAlertHelper, includes the time for the helper to notice the request plus a network roundtrip, and is measured by timing a round-trip null request issued from an auxiliary client to the helper client.\nThe local network latency is fixed and less than 0.1 ms. 
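The crossing costs above were obtained by timing null round-trip requests and averaging over many repetitions. A minimal timing loop of this kind might look as follows; this is an illustrative Python sketch, and send_null_request is a stand-in for whatever RPC call the prototype actually uses, not a BuddyCache API.

    import time

    def average_round_trip_ms(send_null_request, trials=1000):
        """Average latency of a no-op round trip, e.g. client -> redirector -> client."""
        start = time.perf_counter()
        for _ in range(trials):
            send_null_request()        # blocks until the null reply arrives
        elapsed = time.perf_counter() - start
        return (elapsed / trials) * 1000.0   # milliseconds per round trip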
The AlertHelper latency which includes the elapsed time from the help request arrival until the start of help request processing is highly variable and therefore contributes to the high variability of the PeerFetch time.\nThis is because the client in Buddy system is currently single threaded and therefore only starts processing a help request when blocked waiting for a fetch- or commit reply.\nThis overhead is not inherent to the BuddyCache architecture and could be mitigated by a multi-threaded implementation in a system with pre-emptive scheduling.\n6.3.2 Version Cache The solo commit allows a fast client modifying an object to commit independently of a slow peer.\nThe solo commit mechanism introduces extra processing at the server at transaction validation time, and extra processing at the client at transaction commit time and at update or invalidation processing time.\nThe server side overheads are minimal and consist of a page version number update at commit time, and a version number comparison at transaction validation time.\nThe version cache has an entry only when invalidations or updates arrive out of order.\nThis may happen when a transaction accesses objects in multiple servers.\nOur experiments run in a single server system and therefore, the commit time overhead of version cache management at the client does not contribute in the results presented in the section below.\nTo gauge these client side overheads in a multiple server system, we instrumented the version cache implementation to run with a workload trace that included reordered invalidations and timed the basic operations.\nThe extra client commit time processing includes a version cache lookup operation for each object read by the transaction at commit request preparation time, and a version cache insert operation for each object updated by a transaction at commit reply processing time, but only if the updated page is missing some earlier invalidations or updates.\nIt is important that the extra commit time costs are kept to a minimum since client is synchronously waiting for the commit completion.\nThe measurements show that in the worst case, when a large number of invalidations arrive out of order, and about half of the objects modified by T2a (200 objects) reside on reordered pages, the cost of updating the version cache is 0.6 ms. 
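The version-number bookkeeping that produces these costs can be summarized in a few lines. The sketch below is again illustrative, with invented structure names (the real implementation works on Thor's cache data structures): it shows how a client might fold an arriving page version number into the page version or the vcache while preserving the invariant VP of Section 4.5.

    # page_version[pid]  : current version of the cached page pid
    # next_expected[pid] : version that keeps the sequence complete, initialized
    #                      from the version delivered with the fetched page
    # vcache             : oid -> highest reordered version seen for that object

    def apply_version(pid, oids, version, page_version, next_expected, vcache):
        if version == next_expected[pid]:
            # In-sequence invalidation/update: advance the whole page's version.
            page_version[pid] = version
            next_expected[pid] = version + 1
            for oid in oids:
                vcache.pop(oid, None)   # these objects are now covered by the page version
            # A complete implementation would also drain any buffered higher
            # versions that now form a contiguous sequence.
        elif version > next_expected[pid]:
            # Out-of-sequence arrival: remember per-object versions until the
            # missing invalidations or updates fill the gap.
            for oid in oids:
                vcache[oid] = max(vcache.get(oid, 0), version)
        # else: an obsolete, already-covered message; ignore it.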
The invalidation time costs are comparable, but since invalidations and updates are processed in the background, this cost is less important for the overall performance. We are currently working on optimizing the version cache implementation to further reduce these costs.
6.4 Overall Performance
This section examines the performance gains seen by an application running the OO7 benchmark with a BuddyCache in a wide-area network.
6.4.1 Cold Misses
To evaluate the performance gains from avoiding cold misses we compare the cold cache performance of the OO7 benchmark running a read-only workload in the Buddy and Base systems. We derive the times by timing the execution of the systems in the local area network environment and substituting 40 ms and 80 ms delays for the requests crossing the redirector and the server to estimate the performance in the wide-area network.
Figures 7 and 8 show the overall time to complete 1000 cold cache transactions. The numbers were obtained by averaging the overall time of each client in the group.
[Figure 7: Breakdown for cold read-only, 40ms RTT - time (ms) split into CPU, Commit, Server Fetch and Peer Fetch for Base and Buddy with 3, 5 and 10 clients.]
[Figure 8: Breakdown for cold read-only, 80ms RTT - same breakdown as Figure 7 at the higher latency.]
The results show that in a 40 ms network the Buddy system significantly reduces the overall time compared to the Base system, providing a 39% improvement in a three-client group, 46% improvement in the five-client group and 56% improvement in the ten-client case. The overall time includes time spent performing client computation, direct fetch requests, peer fetches, and commit requests. In the three-client group, Buddy and Base incur almost the same commit cost and therefore the entire performance benefit of Buddy is due to peer fetch avoiding direct fetches. In the five- and ten-client groups the server fetch cost for an individual client decreases because, with more clients faulting a fixed-size shared module into the BuddyCache, each client needs to perform fewer server fetches. Figure 8 shows the overall time and cost breakdown in the 80 ms network. BuddyCache provides performance improvements similar to those in the 40ms network. Higher network latency increases the performance advantage of peer fetch relative to direct fetch, but this benefit is offset by the increased commit times.
Figure 9 shows the relative latency improvement provided by BuddyCache (computed as the overall measured time difference between Buddy and Base relative to Base) as a function of network latency, with a fixed server load.
[Figure 9: Cold miss benefit - relative improvement versus network latency (1-100 ms) for 3, 5 and 10 clients, measured and predicted by the performance model.]
[Figure 10: Breakdown for hot read-write, 40ms RTT - time (ms) split into CPU, Commit, Server Fetch and Peer Fetch for Base, Buddy Reader and Buddy Writer.]
The cost of the extra mechanism dominates the BuddyCache benefit when network latency is low. At typical Internet latencies of 20ms-60ms the benefit increases with latency and levels off around 60ms with a significant (up to 62% for ten clients) improvement. Figure 9 includes both the measured improvement and the improvement derived using the analytical model. Remarkably, the analytical results predict the measured improvement very closely, albeit somewhat higher than the empirical values.
The main reason why the simplified model works well is that it captures the dominant performance component, the network latency cost.
6.4.2 Invalidation Misses
To evaluate the performance benefits provided by BuddyCache due to avoided invalidation misses, we compared the hot cache performance of the Base system with two different Buddy system configurations. One of the Buddy system configurations represents a collaborating peer group modifying shared objects (the Writer group); the other represents a group where the peers share a read-only interest in the modified objects (the Reader group) and the writer resides outside the BuddyCache group. In each of the three systems, a single client runs a read-write workload (the writer) and three other clients run a read-only workload (the readers). A Buddy system with one group containing a single reader and another group containing two readers and one writer models the Writer group. A Buddy system with one group containing a single writer and another group running three readers models the Reader group. In Base, one writer and three readers access the server directly. This simple configuration is sufficient to show the impact of the BuddyCache techniques.
[Figure 11: Breakdown for hot read-write, 80ms RTT - time (ms) split into CPU, Commit, Server Fetch and Peer Fetch for Base, Buddy Reader and Buddy Writer.]
Figures 10 and 11 show the overall time to complete 1000 hot cache OO7 read-only transactions. We obtain the numbers by running 2000 transactions to filter out cold misses and then timing the next 1000 transactions. Here again, the reported numbers are derived from the local area network experiment results. The results show that BuddyCache significantly reduces the completion time compared to the Base system. In a 40 ms network, the overall time in the Writer group improves by 62% compared to Base. This benefit is due to peer update, which avoids all misses due to updates. The overall time in the Reader group improves by 30%, due to peer fetch, which allows a client to access an invalidated object at the cost of a local fetch, avoiding the delay of fetching from the server. The latter is an important benefit because it shows that on workloads with updates, peer fetch allows an invalidation-based protocol to provide some of the benefits of an update-based protocol. Note that the performance benefit delivered by peer fetch in the Reader group is approximately 50% less than the performance benefit delivered by peer update in the Writer group. This difference is similar in the 80ms network.
Figure 12 shows the relative latency improvement provided by BuddyCache in the Buddy Reader and Buddy Writer configurations (computed as the overall time difference between Buddy Reader and Base relative to Base, and between Buddy Writer and Base relative to Base) in a hot cache experiment as a function of increasing network latency, for a fixed server load. The peer update benefit dominates the overhead in the Writer configuration even in a low-latency network (peer update incurs minimal overhead) and offers a significant 44-64% improvement over the entire latency range. The figure includes both the measured improvement and the improvement derived using the analytical model. As in the cold cache experiments, the analytical results predict the measured improvement closely.
[Figure 12: Invalidation miss benefit - relative improvement versus network latency (1-100 ms) for Buddy Reader and Buddy Writer, measured and predicted by the performance model.]
The difference is
minimal in the writer group, and somewhat higher in the reader group (consistent with the results of the cold cache experiments). As in the cold cache case, the simplified analytical model works well because it captures the cost of network latency, the dominant performance cost.
7. CONCLUSION
Collaborative applications provide a shared work environment for groups of networked clients collaborating on a common task. They require strong consistency for shared persistent data and efficient access to fine-grained objects. These properties are difficult to provide in wide-area networks because of high network latency. This paper described BuddyCache, a new transactional cooperative caching [20, 16, 13, 2, 28] technique that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments. The technique improves performance yet provides strong correctness and availability properties in the presence of node failures and slow clients.
BuddyCache uses redirection to fetch missing objects directly from group members' caches, and to support peer update, a new lightweight application-level multicast technique that gives group members consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group. Redirection, however, can interfere with object availability. Solo commit is a new validation technique that allows a client in a group to commit independently of slow or failed peers. It provides fine-grained validation using inexpensive coarse-grain version information.
We have designed and implemented a BuddyCache prototype in the Thor distributed transactional object storage system [23] and evaluated the benefits and costs of the system over a range of network latencies. Analytical results, supported by system measurements using the multi-user OO7 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits: for latencies ranging from 40 to 80 milliseconds round trip time, clients using the BuddyCache can reduce by up to 50% the latency of access to shared objects compared to clients accessing the repository directly.
The main contributions of the paper are:
1. extending cooperative caching techniques to support fine-grain strong-consistency access in high-latency environments,
2. an implementation of the system prototype that yields strong performance gains over the base system, and
3. an analytical and measurement-based performance evaluation of the costs and benefits of the new techniques, capturing the dominant performance cost, high network latency.
8. ACKNOWLEDGMENTS
We are grateful to Jay Lepreau and the staff of the Utah experimental testbed emulab.net [1], especially Leigh Stoller, for hosting the experiments and the help with the testbed. We also thank Jeff Chase, Maurice Herlihy, Butler Lampson and the OOPSLA reviewers for the useful comments that improved this paper.
9. REFERENCES
[1] "emulab.net", the Utah Network Emulation Facility. http://www.emulab.net.
[2] A. Adya, M. Castro, B. Liskov, U. Maheshwari, and L. Shrira. Fragment Reconstruction: Providing Global Cache Coherence in a Transactional Storage System. Proceedings of the International Conference on Distributed Computing Systems, May 1997.
[3] A. Adya, R. Gruber, B. Liskov, and U.
Maheshwari.\nEfficient optimistic concurrencty control using loosely synchronized clocks.\nIn Proceedings of the ACM SIGMOD International Conference on Management of Data, May 1995.\n[4] C. Amza, A.L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W. Zwaenepoel.\nTreadmarks: Shared memory computing on networks of workstations.\nIEEE Computer, 29(2), February 1996.\n[5] C. Anderson and A. Karlin.\nTwo Adaptive Hybrid Cache Coherency Protocols.\nIn Proceedings of the 2nd IEEE Symposium on High-Performance Computer Architecture (HPCA ``96), February 1996.\n[6] M. Baker.\nFast Crash Recovery in Distributed File Systems.\nPhD thesis, University of California at Berkeley, 1994.\n[7] P. Cao and C. Liu.\nMaintaining Strong Cache Consistency in the World Wide Web.\nIn 17th International Conference on Distributed Computing Systems., April 1998.\n[8] M. Carey, D. J. Dewitt, C. Kant, and J. F. Naughton.\nA Status Report on the OO7 OODBMS Benchmarking Effort.\nIn Proceedings of OOPSLA, October 1994.\n[9] A. Chankhunthod, M. Schwartz, P. Danzig, K. Worrell, and C. Neerdaels.\nA Hierarchical Internet Object Cache.\nIn USENIX Annual Technical Conference, January 1995.\n[10] J. Chase, S. Gadde, and M. Rabinovich.\nDirectory Structures for Scalable Internet Caches.\nTechnical Report CS-1997-18, Dept. of Computer Science, Duke University, November 1997.\n[11] J. Chase, S. Gadde, and M. Rabinovich.\nNot All Hits Are Created Equal: Cooperative Proxy Caching Over a Wide-Area Network.\nIn Third International WWW Caching Workshop, June 1998.\n[12] D. R. Cheriton and D. Li.\nScalable Web Caching of Frequently Updated Objects using Reliable Multicast.\n2nd USENIX Symposium on Internet Technologies and Systems, October 1999.\n[13] M. D. Dahlin, R. Y. Wang, T. E. Anderson, and D. A. Patterson.\nCooperative caching: Using remote client memory to improve file system performance.\nProceedings of the USENIX Conference on Operating Systems Design and Implementation, November 1994.\n[14] S. Dwarkadas, H. Lu, A.L. Cox, R. Rajamony, and W. Zwaenepoel.\nCombining Compile-Time and Run-Time Support for Efficient Software Distributed Shared Memory.\nIn Proceedings of IEEE, Special Issue on Distributed Shared Memory, March 1999.\n[15] Li Fan, Pei Cao, Jussara Almeida, and Andrei Broder.\nSummary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.\nIn Proceedings of ACM SIGCOMM, September 1998.\n[16] M. Feeley, W. Morgan, F. Pighin, A. Karlin, and H. Levy.\nImplementing Global Memory Management in a Workstation Cluster.\nProceedings of the 15th ACM Symposium on Operating Systems Principles, December 1995.\n[17] M. J. Feeley, J. S. Chase, V. R. Narasayya, and H. M. Levy.\nIntegrating Coherency and Recoverablity in Distributed Systems.\nIn Proceedings of the First Usenix Symposium on Operating sustems Design and Implementation, May 1994.\n[18] P. Ferreira and M. Shapiro et al..\nPerDiS: Design, Implementation, and Use of a PERsistent DIstributed Store.\nIn Recent Advances in Distributed Systems, LNCS 1752, Springer-Verlag, 1999.\n[19] M. J. Franklin, M. Carey, and M. Livny.\nTransactional Client-Server Cache Consistency: Alternatives and Performance.\nIn ACM Transactions on Database Systems, volume 22, pages 315-363, September 1997.\n[20] Michael Franklin, Michael Carey, and Miron Livny.\nGlobal Memory Management for Client-Server DBMS Architectures.\nIn Proceedings of the 19th Intl..\nConference on Very Large Data Bases (VLDB), August 1992.\n[21] S. 
Ghemawat.\nThe Modified Object Buffer: A Storage Management Technique for Object-Oriented Databases.\nPhD thesis, Massachusetts Institute of Technology, 1997.\n[22] L. Kawell, S. Beckhardt, T. Halvorsen, R. Ozzie, and I. Greif.\nReplicated document management in a group communication system.\nIn Proceedings of the ACM CSCW Conference, September 1988.\n[23] B. Liskov, M. Castro, L. Shrira, and A. Adya.\nProviding Persistent Objects in Distributed Systems.\nIn Proceedings of the 13th European Conference on Object-Oriented Programming (ECOOP ``99), June 1999.\n[24] A. Muthitacharoen, B. Chen, and D. Mazieres.\nA Low-bandwidth Network File System.\nIn 18th ACM Symposium on Operating Systems Principles, October 2001.\n[25] B. Oki and B. Liskov.\nViewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems.\nIn Proc.\nof ACM Symposium on Principles of Distributed 38 Computing, August 1988.\n[26] J. O``Toole and L. Shrira.\nOpportunistic Log: Efficient Installation Reads in a Reliable Object Server.\nIn Usenix Symposium on Operation Systems Design and Implementation, November 1994.\n[27] D. Pendarakis, S. Shi, and D. Verma.\nALMI: An Application Level Multicast Infrastructure.\nIn 3rd USENIX Symposium on Internet Technologies and Systems, March 2001.\n[28] P. Sarkar and J. Hartman.\nEfficient Cooperative Caching Using Hints.\nIn Usenix Symposium on Operation Systems Design and Implementation, October 1996.\n[29] A. M. Vahdat, P. C. Eastham, and T. E Anderson.\nWebFS: A Global Cache Coherent File System.\nTechnical report, University of California, Berkeley, 1996.\n[30] A. Wolman, G. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. Levy.\nOn the Scale and Performance of Cooperative Web Proxy Caching.\nIn 17th ACM Symposium on Operating Systems Principles, December 1999.\n[31] J. Yin, L. Alvisi, M. Dahlin, and C. Lin.\nHierarchical Cache Consistency in a WAN.\nIn USENIX Symposium on Internet Technologies and Systems, October 1999.\n[32] J. Yin, L. Alvisi, M. Dahlin, and C. Lin.\nVolume Leases for Consistency in Large-Scale Systems.\nIEEE Transactions on Knowledge and Data Engineering, 11(4), July/August 1999.\n[33] M. Zaharioudakis, M. J. Carey, and M. J. 
Franklin.\nAdaptive, Fine-Grained Sharing in a Client-Server OODBMS: A Callback-Based Approach.\nACM Transactions on Database Systems, 22:570-627, December 1997.\n10.\nAPPENDIX This appendix outlines the BuddyCache failover protocol.\nTo accommodate heterogeneous clients including resourcepoor hand-helds we do not require the availability of persistent storage in the BuddyCache peer group.\nThe BuddyCache design assumes that the client caches and the redirector data structures do not survive node failures.\nA failure of a client or a redirector is detected by a membership protocol that exchanges periodic I am alive messages between group members and initiates a failover protocol.\nThe failover determines the active group participants, re-elects a redirector if needed, reinitializes the BuddyCache data structures in the new configuration and restarts the protocol.\nThe group reconfiguration protocol is similar to the one presented in [25].\nHere we describe how the failover manages the BuddyCache state.\nTo restart the BuddyCache protocol, the failover needs to resynchronize the redirector page directory and clientserver request forwarding so that active clients can continue running transactions using their caches.\nIn the case of a client failure, the failover removes the crashed client pages from the directory.\nAny response to an earlier request initiated by the failed client is ignored except a commit reply, in which case the redirector distributes the retained committed updates to active clients caching the modified pages.\nIn the case of a redirector failure, the failover protocol reinitializes sessions with the servers and clients, and rebuilds the page directory using a protocol similar to one in [6].\nThe newly restarted redirector asks the active group members for the list of pages they are caching and the status of these pages, i.e. 
whether the pages are complete or incomplete. Requests outstanding at the redirector at the time of the crash may be lost. A lost fetch request will time out at the client and will be retransmitted. A transaction running at the client during a failover and committing after the failover is treated as a regular transaction; a transaction trying to commit during a failover is aborted by the failover protocol. The client will restart the transaction and the commit request will be retransmitted after the failover.
Invalidations, updates or collected update acknowledgements lost at the crashed redirector could prevent the garbage collection of pending invalidations at the servers or of the vcache entries at the clients. Therefore, servers detecting a redirector crash retransmit unacknowledged invalidations and commit replies. Unique version numbers in invalidations and updates ensure that duplicate retransmitted requests are detected and discarded.
Since the transaction validation procedure depends on the cache coherence protocol to ensure that transactions do not read stale data, we now need to argue that the BuddyCache failover protocol does not compromise the correctness of the validation procedure. Recall that BuddyCache transaction validation uses two complementary mechanisms, page version numbers and invalidation acknowledgements from the clients, to check that a transaction has read up-to-date data. The redirector-based invalidation (and update) acknowledgement propagation ensures the following invariant: when a server receives an acknowledgement for a modification (invalidation or update) of an object o from a client group, any client in the group caching o has either installed the latest value of o, or has invalidated o. Therefore, if a server receives a commit request from a client for a transaction T reading an object o after a failover in the client group, and the server has no unacknowledged invalidation for o pending for this group, the version of the object read by T is up-to-date independently of client or redirector failures.
Now consider the validation using version numbers. The transaction commit record contains a version number for each object read by the transaction. The version number protocol maintains the invariant VP, which ensures that the value of object o read by the transaction corresponds to the highest version number for o received by the client. The invariant holds since the client never applies an earlier modification after a later modification has been received. Retransmission of invalidations and updates maintains this invariant. The validation procedure checks that the version number in the commit record matches the version number in the unacknowledged outstanding invalidation. Since this check is an end-to-end client-server check, it is unaffected by client or redirector failure.
The failover protocol has not been implemented yet.
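Since the failover protocol is only outlined and has not been implemented, the following sketch is purely illustrative of the directory-rebuild step described above; the message and structure names (including ask_cached_pages) are invented for the example.

    def rebuild_directory(active_clients):
        """Hypothetical rebuild of the redirector page directory after a
        redirector restart: each active client reports the pages it caches
        and whether each cached copy is complete."""
        directory = {}   # page id -> {"holders": set of clients, "complete": set of clients}
        for client in active_clients:
            for page_id, is_complete in client.ask_cached_pages():
                entry = directory.setdefault(page_id, {"holders": set(), "complete": set()})
                entry["holders"].add(client)     # needed to propagate invalidations/updates
                if is_complete:
                    entry["complete"].add(client)  # only complete copies may serve peer fetches
        return directory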
for latencies ranging from 40 to 80 milliseconds round trip time , clients using the BuddyCache can reduce by up to 50 % the latency of access to shared objects compared to the clients accessing the repository directly .\nThe main contributions of the paper are :\nfine-grain strong-consistency access in high-latency environments , 2 .\nan implementation of the system prototype that yields strong performance gains over the base system , 3 .\nanalytical and measurement based performance evaluation of the costs and benefits of the new techniques capturing the dominant performance cost , high network latency .", "lvl-4": "BuddyCache : High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN *\nABSTRACT\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task .\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects .\nThese properties are difficult to provide in wide-area networks because of high network latency .\nBuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments .\nThe challenge is to improve performance while providing the correctness and availability properties of a transactional caching protocol in the presence of node failures and slow peers .\nWe have implemented a BuddyCache prototype and evaluated its performance .\nAnalytical results , confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies , e.g. ranging from 40 to 80 milliseconds round trip time to the storage server , peers using BuddyCache can reduce by up to 50 % the latency of access to shared objects compared to accessing the remote servers directly .\n1 .\nINTRODUCTION\nNevertheless , distributed applications may perform poorly in wide-area network environments .\nNetwork bandwidth problems will improve in the foreseeable future , but improvement in network latency is fundamentally limited .\nBuddyCache is a new object caching technique that addresses the network latency problem for collaborative applications in wide-area network environment .\nCollaborative applications provide a shared work environment for groups of networked users collaborating on a common task , for example a team of engineers jointly overseeing a construction project .\nStrong-consistency collaborative applications , for example CAD systems , use client/server transactional object storage systems to ensure consistent access to shared persistent data .\nUp to now however , users have rarely considered running consistent network storage systems over wide-area networks as performance would be unacceptable [ 24 ] .\nFor transactional storage systems , the high cost of wide-area network interactions to maintain data consistency is the main cost limiting the performance and therefore , in wide-area network environments , collaborative applications have been adapted to use weaker consistency storage systems [ 22 ] .\nAdapting an application to use weak consistency storage system requires significant effort since the application needs to be rewritten to deal with a different storage system semantics .\nIf shared persistent objects could be accessed with low-latency , a new field of distributed strong-consistency applications could be opened .\nCooperative web caching [ 10 , 11 , 15 ] is a 
well-known approach to reducing client interaction with a server by allowing one client to obtain missing objects from a another client instead of the server .\nHowever , cooperative web caching techniques do not provide two important properties needed by collaborative applications , strong consistency and efficient\naccess to fine-grained objects .\nCooperative object caching systems [ 2 ] provide these properties .\nHowever , they rely on interaction with the server to provide fine-grain cache coherence that avoids the problem of false sharing when accesses to unrelated objects appear to conflict because they occur on the same physical page .\nInteraction with the server increases latency .\nThe contribution of this work is extending cooperative caching techniques to provide strong consistency and efficient access to fine-grain objects in wide-area environments .\nThe engineers use a collaborative CAD application to revise and update complex project design documents .\nThe shared documents are stored in transactional repository servers at the company home site .\nThe engineers use workstations running repository clients .\nThe workstations are interconnected by a fast local Ethernet but the network connection to the home repository servers is slow .\nTo improve access latency , clients fetch objects from repository servers and cache and access them locally .\nA coherence protocol ensures that client caches remain consistent when objects are modified .\nThe performance problem facing the collaborative application is coordinating with the servers consistent access to shared objects .\nWith BuddyCache , a group of close-by collaborating clients , connected to storage repository via a high-latency link , can avoid interactions with the server if needed objects , updates or coherency information are available in some client in the group .\nBuddyCache presents two main technical challenges .\nOne challenge is how to provide efficient access to shared finegrained objects in the collaborative group without imposing performance overhead on the entire caching system .\nThe other challenge is to support fine-grain cache coherence in the presence of slow and failed nodes .\nBuddyCache uses a '' redirection '' approach similar to one used in cooperative web caching systems [ 11 ] .\nA redirector server , interposed between the clients and the remote servers , runs on the same network as the collaborating group and , when possible , replaces the function of the remote servers .\nIf the client request can not be served locally , the redirector forwards it to a remote server .\nWhen one of the clients in the group fetches a shared object from the repository , the object is likely to be needed by other clients .\nBuddyCache redirects subsequent requests for this object to the caching client .\nSimilarly , when a client creates or modifies a shared object , the new data is likely to be of potential interest to all group members .\nBuddyCache uses redirection to support peer update , a lightweight '' application-level multicast '' technique that provides group members with consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group .\nNevertheless , in a transactional system , redirection interferes with shared object availability .\nSolo commit , is a validation technique used by BuddyCache to avoid the undesirable client dependencies that reduce object availability when some client nodes in the group are slow , or clients fail independently 
.\nA salient feature of solo commit is supporting fine-grained validation using inexpensive coarse-grained coherence information .\nWe designed and implemented a BuddyCache prototype and studied its performance benefits and costs using analytical modeling and system measurements .\nWe compared the storage system performance with and without BuddyCache and considered how the cost-benefit balance is affected by network latency .\nThese strong performance gains could make transactional object storage systems more attractive for collaborative applications in wide-area environments .\n2 .\nRELATED WORK\nCooperative caching techniques [ 20 , 16 , 13 , 2 , 28 ] provide access to client caches to avoid high disk access latency in an environment where servers and clients run on a fast local area network .\nThese techniques use the server to provide redirection and do not consider issues of high network latency .\nCooperative Web caching techniques , ( e.g. [ 11 , 15 ] ) investigate issues of maintaining a directory of objects cached in nearby proxy caches in wide-area environment , using distributed directory protocols for tracking cache changes .\nThis work does not consider issues of consistent concurrent updates to shared fine-grained objects .\nThis multicast transport level solution is geared to the single writer semantics of web objects .\nIn contrast , BuddyCache uses '' application level '' multicast and a sender-reliable coherence protocol to provide similar access latency improvements for transactional objects .\nApplication level multicast solution in a middle-ware system was described by Pendarakis , Shi and Verma in [ 27 ] .\nThe schema supports small multi-sender groups appropriate for collaborative applications and considers coherence issues in the presence of failures but does not support strong consistency or fine-grained sharing .\nThe protocol uses leases to provide fault-tolerant call-backs and takes advantage of nearby caches to reduce the cost of lease extensions .\nThe study uses simulation to investigate latency and fault tolerance issues in hierarchical avoidance-based coherence scheme .\nIn contrast , our work uses implementation and analysis to evaluate the costs and benefits of redirection and fine grained updates in an optimistic system .\nAnderson , Eastham and Vahdat in WebFS [ 29 ] present a global file system coherence protocol that allows clients to choose\non per file basis between receiving updates or invalidations .\nUpdates and invalidations are multicast on separate channels and clients subscribe to one of the channels .\nThe protocol exploits application specific methods e.g. 
last-writer-wins policy for broadcast applications , to deal with concurrent updates but is limited to file systems .\nBuddyCache provides similar bandwidth improvements when objects are available in the group cache .\n7 .\nCONCLUSION\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task .\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects .\nThese properties are difficult to provide in wide-area network because of high network latency .\nThis paper described BuddyCache , a new transactional cooperative caching [ 20 , 16 , 13 , 2 , 28 ] technique that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments .\nThe technique improves performance yet provides strong correctness and availability properties in the presence of node failures and slow clients .\nRedirection , however , can interfere with object availability .\nSolo commit , is a new validation technique that allows a client in a group to commit independently of slow or failed peers .\nIt provides fine-grained validation using inexpensive coarse-grain version information .\nWe have designed and implemented BuddyCache prototype in Thor distributed transactional object storage system [ 23 ] and evaluated the benefits and costs of the system over a range of network latencies .\nfine-grain strong-consistency access in high-latency environments , 2 .\nan implementation of the system prototype that yields strong performance gains over the base system , 3 .\nanalytical and measurement based performance evaluation of the costs and benefits of the new techniques capturing the dominant performance cost , high network latency .", "lvl-2": "BuddyCache : High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN *\nABSTRACT\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task .\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects .\nThese properties are difficult to provide in wide-area networks because of high network latency .\nBuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments .\nThe challenge is to improve performance while providing the correctness and availability properties of a transactional caching protocol in the presence of node failures and slow peers .\nWe have implemented a BuddyCache prototype and evaluated its performance .\nAnalytical results , confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies , e.g. 
ranging from 40 to 80 milliseconds round trip time to the storage server , peers using BuddyCache can reduce by up to 50 % the latency of access to shared objects compared to accessing the remote servers directly .\n1 .\nINTRODUCTION\nImprovements in network connectivity erode the distinction between local and wide-area computing and , increasingly , users expect their work environment to follow them wherever they go .\nNevertheless , distributed applications may perform poorly in wide-area network environments .\nNetwork bandwidth problems will improve in the foreseeable future , but improvement in network latency is fundamentally limited .\nBuddyCache is a new object caching technique that addresses the network latency problem for collaborative applications in wide-area network environment .\nCollaborative applications provide a shared work environment for groups of networked users collaborating on a common task , for example a team of engineers jointly overseeing a construction project .\nStrong-consistency collaborative applications , for example CAD systems , use client/server transactional object storage systems to ensure consistent access to shared persistent data .\nUp to now however , users have rarely considered running consistent network storage systems over wide-area networks as performance would be unacceptable [ 24 ] .\nFor transactional storage systems , the high cost of wide-area network interactions to maintain data consistency is the main cost limiting the performance and therefore , in wide-area network environments , collaborative applications have been adapted to use weaker consistency storage systems [ 22 ] .\nAdapting an application to use weak consistency storage system requires significant effort since the application needs to be rewritten to deal with a different storage system semantics .\nIf shared persistent objects could be accessed with low-latency , a new field of distributed strong-consistency applications could be opened .\nCooperative web caching [ 10 , 11 , 15 ] is a well-known approach to reducing client interaction with a server by allowing one client to obtain missing objects from a another client instead of the server .\nCollaborative applications seem a particularly good match to benefit from this approach since one of the hard problems , namely determining what objects are cached where , becomes easy in small groups typical of collaborative settings .\nHowever , cooperative web caching techniques do not provide two important properties needed by collaborative applications , strong consistency and efficient\naccess to fine-grained objects .\nCooperative object caching systems [ 2 ] provide these properties .\nHowever , they rely on interaction with the server to provide fine-grain cache coherence that avoids the problem of false sharing when accesses to unrelated objects appear to conflict because they occur on the same physical page .\nInteraction with the server increases latency .\nThe contribution of this work is extending cooperative caching techniques to provide strong consistency and efficient access to fine-grain objects in wide-area environments .\nConsider a team of engineers employed by a construction company overseeing a remote project and working in a shed at the construction site .\nThe engineers use a collaborative CAD application to revise and update complex project design documents .\nThe shared documents are stored in transactional repository servers at the company home site .\nThe engineers use workstations running repository 
clients .\nThe workstations are interconnected by a fast local Ethernet but the network connection to the home repository servers is slow .\nTo improve access latency , clients fetch objects from repository servers and cache and access them locally .\nA coherence protocol ensures that client caches remain consistent when objects are modified .\nThe performance problem facing the collaborative application is coordinating with the servers consistent access to shared objects .\nWith BuddyCache , a group of close-by collaborating clients , connected to storage repository via a high-latency link , can avoid interactions with the server if needed objects , updates or coherency information are available in some client in the group .\nBuddyCache presents two main technical challenges .\nOne challenge is how to provide efficient access to shared finegrained objects in the collaborative group without imposing performance overhead on the entire caching system .\nThe other challenge is to support fine-grain cache coherence in the presence of slow and failed nodes .\nBuddyCache uses a '' redirection '' approach similar to one used in cooperative web caching systems [ 11 ] .\nA redirector server , interposed between the clients and the remote servers , runs on the same network as the collaborating group and , when possible , replaces the function of the remote servers .\nIf the client request can not be served locally , the redirector forwards it to a remote server .\nWhen one of the clients in the group fetches a shared object from the repository , the object is likely to be needed by other clients .\nBuddyCache redirects subsequent requests for this object to the caching client .\nSimilarly , when a client creates or modifies a shared object , the new data is likely to be of potential interest to all group members .\nBuddyCache uses redirection to support peer update , a lightweight '' application-level multicast '' technique that provides group members with consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group .\nNevertheless , in a transactional system , redirection interferes with shared object availability .\nSolo commit , is a validation technique used by BuddyCache to avoid the undesirable client dependencies that reduce object availability when some client nodes in the group are slow , or clients fail independently .\nA salient feature of solo commit is supporting fine-grained validation using inexpensive coarse-grained coherence information .\nSince redirection supports the performance benefits of reducing interaction with the server but introduces extra processing cost due to availability mechanisms and request forwarding , this raises the question is the '' cure '' worse than the '' disease '' ?\nWe designed and implemented a BuddyCache prototype and studied its performance benefits and costs using analytical modeling and system measurements .\nWe compared the storage system performance with and without BuddyCache and considered how the cost-benefit balance is affected by network latency .\nAnalytical results , supported by measurements based on the multi-user 007 benchmark , indicate that for typical Internet latencies BuddyCache provides significant performance benefits , e.g. 
for latencies ranging from 40 to 80 milliseconds round trip time , clients using the BuddyCache can reduce by up to 50 % the latency of access to shared objects compared to the clients accessing the repository directly .\nThese strong performance gains could make transactional object storage systems more attractive for collaborative applications in wide-area environments .\n2 .\nRELATED WORK\nCooperative caching techniques [ 20 , 16 , 13 , 2 , 28 ] provide access to client caches to avoid high disk access latency in an environment where servers and clients run on a fast local area network .\nThese techniques use the server to provide redirection and do not consider issues of high network latency .\nMultiprocessor systems and distributed shared memory systems [ 14 , 4 , 17 , 18 , 5 ] use fine-grain coherence techniques to avoid the performance penalty of false sharing but do not address issues of availability when nodes fail .\nCooperative Web caching techniques , ( e.g. [ 11 , 15 ] ) investigate issues of maintaining a directory of objects cached in nearby proxy caches in wide-area environment , using distributed directory protocols for tracking cache changes .\nThis work does not consider issues of consistent concurrent updates to shared fine-grained objects .\nCheriton and Li propose MMO [ 12 ] a hybrid web coherence protocol that combines invalidations with updates using multicast delivery channels and receiver-reliable protocol , exploiting locality in a way similar to BuddyCache .\nThis multicast transport level solution is geared to the single writer semantics of web objects .\nIn contrast , BuddyCache uses '' application level '' multicast and a sender-reliable coherence protocol to provide similar access latency improvements for transactional objects .\nApplication level multicast solution in a middle-ware system was described by Pendarakis , Shi and Verma in [ 27 ] .\nThe schema supports small multi-sender groups appropriate for collaborative applications and considers coherence issues in the presence of failures but does not support strong consistency or fine-grained sharing .\nYin , Alvisi , Dahlin and Lin [ 32 , 31 ] present a hierarchical WAN cache coherence scheme .\nThe protocol uses leases to provide fault-tolerant call-backs and takes advantage of nearby caches to reduce the cost of lease extensions .\nThe study uses simulation to investigate latency and fault tolerance issues in hierarchical avoidance-based coherence scheme .\nIn contrast , our work uses implementation and analysis to evaluate the costs and benefits of redirection and fine grained updates in an optimistic system .\nAnderson , Eastham and Vahdat in WebFS [ 29 ] present a global file system coherence protocol that allows clients to choose\non per file basis between receiving updates or invalidations .\nUpdates and invalidations are multicast on separate channels and clients subscribe to one of the channels .\nThe protocol exploits application specific methods e.g. 
last-writer-wins policy for broadcast applications , to deal with concurrent updates but is limited to file systems .\nMazieres studies a bandwidth saving technique [ 24 ] to detect and avoid repeated file fragment transfers across a WAN when fragments are available in a local cache .\nBuddyCache provides similar bandwidth improvements when objects are available in the group cache .\n3 .\nBUDDYCACHE\nHigh network latency imposes a performance penalty on transactional applications accessing shared persistent objects in a wide-area network environment .\nThis section describes the BuddyCache approach for reducing the network latency penalty in collaborative applications and explains the main design decisions .\nWe consider a system in which a distributed transactional object repository stores objects in highly reliable servers , perhaps outsourced in data-centers connected via high-bandwidth reliable networks .\nCollaborating clients , interconnected via a fast local network , connect via high-latency , possibly satellite , links to the servers at the data-centers to access shared persistent objects .\nThe servers provide disk storage for the persistent objects .\nA persistent object is owned by a single server .\nObjects may be small ( order of 100 bytes for programming language objects [ 23 ] ) .\nTo amortize the cost of disk and network transfer , objects are grouped into physical pages .\nTo improve object access latency , clients fetch the objects from the servers and cache and access them locally .\nA transactional cache coherence protocol runs at clients and servers to ensure that client caches remain consistent when objects are modified .\nThe performance problem facing the collaborating client group is the high latency of coordinating consistent access to the shared objects .\nThe BuddyCache architecture is based on a request redirection server , interposed between the clients and the remote servers .\nThe interposed server ( the redirector ) runs on the same network as the collaborative group and , when possible , replaces the function of the remote servers .\nIf the client request can be served locally , the interaction with the server is avoided .\nIf the client request cannot be served locally , the redirector forwards it to a remote server .\nThe redirection approach has been used to improve the performance of web caching protocols .\nThe BuddyCache redirector supports the correctness , availability and fault-tolerance properties of a transactional caching protocol [ 19 ] .\nThe correctness property ensures one-copy serializability of the objects committed by the client transactions .\nThe availability and fault-tolerance properties ensure that a crashed or slow client does not disrupt any other client 's access to persistent objects .\nThe three types of client-server interactions in a transactional caching protocol are the commit of a transaction , the fetch of an object missing in a client cache , and the exchange of cache coherence information .\nBuddyCache avoids interactions with the server when a missing object or cache coherence information needed by a client is available within the collaborating group .\nThe redirector always interacts with the servers at commit time because only storage servers provide transaction durability in a way that ensures committed data remains available in the presence of client or redirector failures .\nFigure 1 : BuddyCache .\nFigure 1 shows the overall BuddyCache architecture .\n3.1 Cache Coherence\nThe redirector maintains a directory of pages cached at each
client to provide cooperative caching [ 20 , 16 , 13 , 2 , 28 ] , redirecting a client fetch request to another client that caches the requested object .\nIn addition , the redirector manages cache coherence .\nSeveral efficient transactional cache coherence protocols [ 19 ] exist for persistent object storage systems .\nProtocols make different choices in granularity of data transfers and granularity of cache consistency .\nThe current best-performing protocols use page granularity transfers when clients fetch missing objects from a server and object granularity coherence to avoid false ( page-level ) conflicts .\nThe transactional caching taxonomy [ 19 ] proposed by Carey , Franklin and Livny classifies the coherence protocols into two main categories according to whether a protocol avoids or detects access to stale objects in the client cache .\nThe BuddyCache approach could be applied to both categories with different performance costs and benefits in each category .\nWe chose to investigate BuddyCache in the context of OCC [ 3 ] , the current best performing detection-based protocol .\nWe chose OCC because it is simple , performs well in high-latency networks , has been implemented and we had access to the implementation .\nWe are investigating BuddyCache with PSAA [ 33 ] , the best performing avoidance-based protocol .\nBelow we outline the OCC protocol [ 3 ] .\nThe OCC protocol uses object-level coherence .\nWhen a client requests a missing object , the server transfers the containing page .\nTransactions can read and update locally cached objects without server intervention .\nHowever , before a transaction commits it must be '' validated '' ; the server must make sure the validating transaction has not read a stale version of some object that was updated by a successfully committed or validated transaction .\nIf validation fails , the transaction is aborted .\nTo reduce the number and cost of aborts , a server sends background object invalidation messages to clients caching the containing pages .\nFigure 2 : Peer fetch .\nFigure 3 : Peer update .\nWhen clients receive invalidations they remove stale objects from the cache and send background acknowledgments to let the server know about this .\nSince invalidations remove stale objects from the client cache , an invalidation acknowledgment indicates to the server that a client with no outstanding invalidations has read up-to-date objects .\nAn unacknowledged invalidation indicates a stale object may have been accessed in the client cache .\nThe validation procedure at the server aborts a client transaction if the client reads an object while an invalidation is outstanding .\nThe '' acknowledged invalidation '' mechanism supports object-level cache coherence without object-based directories or per-object version numbers .\nAvoiding per-object overheads is very important to reduce the performance penalties [ 3 ] of managing many small objects , since typical objects are small .\nAn important BuddyCache design goal is to maintain this benefit .\nSince in BuddyCache a page can be fetched into a client cache without server intervention ( as illustrated in figure 2 ) , cache directories at the servers keep track of pages cached in each collaborating group rather than in each client .\nThe redirector keeps track of pages cached in each client in a group .\nServers send to the redirector invalidations for pages cached in the entire group .\nThe redirector propagates invalidations from servers to affected clients .\nWhen all affected clients acknowledge invalidations , the redirector can propagate the '' group acknowledgment '' to the server .
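To make the redirection machinery of Section 3.1 concrete, the following minimal Python sketch shows the bookkeeping a redirector needs for peer fetch and for aggregating per-client invalidation acknowledgments into a single group acknowledgment. It is not the paper's implementation: all class and method names are illustrative, messages are modeled as plain method calls, and peer update, solo commit and failover are omitted.

```python
# Hypothetical sketch of the BuddyCache redirector's bookkeeping (illustrative
# names; message transport, concurrency and failure handling are omitted).

class ServerStub:
    """Stand-in for the remote repository servers; the real servers keep a
    durable store and a per-group (not per-client) cache directory."""
    def record_group_page(self, page_id):
        pass
    def group_ack(self, inv_id):
        pass

class Redirector:
    def __init__(self, server):
        self.server = server
        self.directory = {}      # page id -> set of client ids caching a complete copy
        self.pending_acks = {}   # invalidation id -> client ids that have not acked yet

    def fetch(self, client_id, page_id):
        """Peer fetch: point the client at a group member that caches the page,
        otherwise forward the request to the remote server."""
        peers = self.directory.get(page_id, set()) - {client_id}
        if peers:
            source = "peer:" + next(iter(peers))
        else:
            source = "server"
            self.server.record_group_page(page_id)  # server tracks the group as a whole
        self.directory.setdefault(page_id, set()).add(client_id)
        return source

    def invalidate(self, inv_id, page_id):
        """Fan a server invalidation out to every group member caching the page."""
        targets = set(self.directory.get(page_id, set()))
        self.pending_acks[inv_id] = set(targets)
        return targets           # in the real system, an invalidation message per client

    def ack(self, client_id, inv_id):
        """Collect one client's acknowledgment; once all affected clients have
        answered, send a single group acknowledgment upstream."""
        waiting = self.pending_acks.get(inv_id)
        if waiting is None:
            return
        waiting.discard(client_id)
        if not waiting:
            del self.pending_acks[inv_id]
            self.server.group_ack(inv_id)

# Toy run: the second fetch of page "P1" is served from peer c1, not the server.
r = Redirector(ServerStub())
assert r.fetch("c1", "P1") == "server"
assert r.fetch("c2", "P1") == "peer:c1"
for c in r.invalidate("i7", "P1"):
    r.ack(c, "i7")               # the group ack is sent after the last client answers
```

In the real system the directory also tracks page status and evictions, and only complete pages (no invalidated objects) are eligible for peer fetch; those details are covered in Section 4 and left out of the sketch.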
3.2 Light-weight Peer Update\nWhen one of the clients in the collaborative group creates or modifies shared objects , the copies cached by any other client become stale , but the new data is likely to be of potential interest to the group members .\nThe goal in BuddyCache is to provide group members with efficient and consistent access to updates committed within the group without imposing extra overhead on other parts of the storage system .\nThe two possible approaches to deal with stale data are cache invalidations and cache updates .\nCache coherence studies in web systems ( e.g. [ 7 ] ) , DSM systems ( e.g. [ 5 ] ) , and transactional object systems ( e.g. [ 19 ] ) compare the benefits of update and invalidation .\nThe studies show the benefits are strongly workload-dependent .\nIn general , invalidation-based coherence protocols are efficient since invalidations are small , batched and piggybacked on other messages .\nMoreover , invalidation protocols match the current hardware trend toward increasing client cache sizes .\nLarger caches are likely to contain much more data than is actively used .\nUpdate-based protocols that propagate updates to low-interest objects in a wide-area network would be wasteful .\nNevertheless , invalidation-based coherence protocols can perform poorly in high-latency networks [ 12 ] if the object 's new value is likely to be of interest to another group member .\nWith an invalidation-based protocol , one member 's update will invalidate another member 's cached copy , causing the latter to perform a high-latency fetch of the new value from the server .\nBuddyCache circumvents this well-known bandwidth vs. latency trade-off imposed by update and invalidation protocols in wide-area network environments .\nIt avoids the latency penalty of invalidations by using the redirector to retain and propagate updates committed by one client to other clients within the group .\nThis avoids the bandwidth penalty of updates because servers propagate invalidations to the redirectors .\nAs far as we know , this use of localized multicast in the BuddyCache redirector is new and has not been used in earlier caching systems .\nThe peer update works as follows .\nAn update commit request from a client arriving at the redirector contains the object updates .\nThe redirector retains the updates and propagates the request to the coordinating server .\nAfter the transaction commits , the coordinator server sends a commit reply to the redirector of the committing client group .\nThe redirector forwards the reply to the committing client , and also propagates the retained committed updates to the clients caching the modified pages ( see figure 3 ) .\nSince groups outside the committing group receive only invalidations , there is no extra overhead outside the committing group .\n3.3 Solo commit\nIn the OCC protocol , clients acknowledge server invalidations ( or updates ) to indicate removal of stale data .\nThe straightforward '' group acknowledgement '' protocol , where the redirector collects and propagates a collective acknowledgement to the server , interferes with the availability property of the transactional caching protocol [ 19 ] since a client that is slow to acknowledge an invalidation or has failed can delay a group acknowledgement and prevent another client in the group from committing a transaction .\nFor example ,
an engineer that commits a repeated revision to the same shared design object ( and therefore holds the latest version of the object ) may need to abort if the '' group acknowledgement '' has not propagated to the server .\nConsider a situation depicted in figure 4 where Client1 commits a transaction T that reads the latest version of an object x on page P recently modified by Client1 .\nIf the commit request for T reaches the server before the collective acknowledgement from Client2 for the last modification of x arrives at the server , the OCC validation procedure considers x to be stale and aborts T ( because , as explained above , an invalidation unacknowledged by a client , acts as indication to the server that the cached object value is stale at the client ) .\nNote that while invalidations are not required for the correctness of the OCC protocol , they are very important for the performance since they reduce the performance penalties of aborts and false sharing .\nThe asynchronous invalidations are an important part of the reason OCC has competitive performance with PSAA [ 33 ] , the best performing avoidance-based protocol [ 3 ] .\nNevertheless , since invalidations are sent and processed asynchronously , invalidation processing may be arbitrarily delayed at a client .\nLease-based schemes ( time-out based ) have been proposed to improve the availability of hierarchical callback-based coherence protocols [ 32 ] but the asynchronous nature of invalidations makes the lease-based approaches inappropriate for asynchronous invalidations .\nThe Solo commit validation protocol allows a client with up-to-date objects to commit a transaction even if the group acknowledgement is delayed due to slow or crashed peers .\nThe protocol requires clients to include extra information with the transaction read sets in the commit message , to indicate to the server the objects read by the transaction are up-to-date .\nObject version numbers could provide a simple way to track up-to-date objects but , as mentioned above , maintaining per object version numbers imposes unacceptably high overheads ( in disk storage , I/O costs and directory size ) on the entire object system when objects are small [ 23 ] .\nInstead , solo commit uses coarse-grain page version numbers to identify fine-grain object versions .\nA page version number is incremented at a server when at transaction that modifies objects on the page commits .\nUpdates committed by a single transaction and corresponding invalidations are therefore uniquely identified by the modified page version number .\nPage version numbers are propagated to clients in fetch replies , commit replies and with invalidations , and clients include page version numbers in commit requests sent to the servers .\nIf a transaction fails validation due to missing '' group acknowledgement '' , the server checks page version numbers of the objects in the transaction read set and allows the transaction to commit if the client has read from the latest page version .\nThe page version numbers enable independent commits but page version checks only detect page-level conflicts .\nTo detect object-level conflicts and avoid the problem of false sharing we need the '' acknowledged invalidations '' .\nSection 4 describes the details of the implementation of solo commit support for fine-grain sharing .\n3.4 Group Configuration\nThe BuddyCache architecture supports multiple concurrent peer groups .\nPotentially , it may be faster to access data cached in another peer group than to 
access a remote server .\nIn such case extending BuddyCache protocols to support multi-level peer caching could be worthwhile .\nWe have not pursued this possibility for several reasons .\nIn web caching workloads , simply increasing the population of clients in a proxy cache often increases the overall cache hit rate [ 30 ] .\nIn BuddyCache applications , however , we expect sharing to result mainly from explicit client interaction and collaboration , suggesting that inter-group fetching is unlikely to occur .\nMoreover , measurements from multi-level web caching systems [ 9 ] indicate that a multilevel system may not be advantageous unless the network connection between the peer groups is very fast .\nWe are primarily interested in environments where closely collaborating peers have fast close-range connectivity , but the connection between peer groups may be slow .\nAs a result , we decided that support for inter-group fetching in BuddyCache is not a high priority right now .\nTo support heterogenous resource-rich and resource-poor peers , the BuddyCache redirector can be configured to run either in one of the peer nodes or , when available , in a separate node within the site infrastructure .\nMoreover , in a resource-rich infrastructure node , the redirector can be configured as a stand-by peer cache to receive pages fetched by other peers , emulating a central cache somewhat similar to a regional web proxy cache .\nFrom the BuddyCache cache coherence protocol point of view , however , such a stand-by peer cache is equivalent to a regular peer cache and therefore we do not consider this case separately in the discussion in this paper .\n4 .\nIMPLEMENTATION\nIn this section we provide the details of the BuddyCache implementation .\nWe have implemented BuddyCache in the Thor client/server object-oriented database [ 23 ] .\nThor supports high performance access to distributed objects and therefore provides a good test platform to investigate BuddyCache performance .\nFigure 4 : Validation with Slow Peers\n4.1 Base Storage System\nThor servers provide persistent storage for objects and clients cache copies of these objects .\nApplications run at the clients and interact with the system by making calls on methods of cached objects .\nAll method calls occur within atomic transactions .\nClients communicate with servers to fetch pages or to commit a transaction .\nThe servers have a disk for storing persistent objects , a stable transaction log , and volatile memory .\nThe disk is organized as a collection of pages which are the units of disk access .\nThe stable log holds commit information and object modifications for committed transactions .\nThe server memory contains cache directory and a recoverable modified object cache called the MOB .\nThe directory keeps track of which pages are cached by which clients .\nThe MOB holds recently modified objects that have not yet been written back to their pages on disk .\nAs MOB fills up , a background process propagates modified objects to the disk [ 21 , 26 ] .\n4.2 Base Cache Coherence\nTransactions are serialized using optimistic concurrency control OCC [ 3 ] described in Section 3.1 .\nWe provide some of the relevant OCC protocol implementation details .\nThe client keeps track of objects that are read and modified by its transaction ; it sends this information , along with new copies of modified objects , to the servers when it tries to commit the transaction .\nThe servers determine whether the commit is possible , using a two-phase commit 
protocol if the transaction used objects at multiple servers .\nIf the transaction commits , the new copies of modified objects are appended to the log and also inserted in the MOB .\nThe MOB is recoverable , i.e. if the server crashes , the MOB is reconstructed at recovery by scanning the log .\nSince objects are not locked before being used , a transaction commit can cause caches to contain obsolete objects .\nServers will abort a transaction that used obsolete objects .\nHowever , to reduce the probability of aborts , servers notify clients when their objects become obsolete by sending them invalidation messages ; a server uses its directory and the information about the committing transaction to determine what invalidation messages to send .\nInvalidation messages are small because they simply identify obsolete objects .\nFurthermore , they are sent in the background , batched and piggybacked on other messages .\nWhen a client receives an invalidation message , it removes obsolete objects from its cache and aborts the current transaction if it used them .\nThe client continues to retain pages containing invalidated objects ; these pages are now incomplete with '' holes '' in place of the invalidated objects .\nPerforming invalidation on an object basis means that false sharing does not cause unnecessary aborts ; keeping incomplete pages in the client cache means that false sharing does not lead to unnecessary cache misses .\nClients acknowledge invalidations to indicate removal of stale data as explained in Section 3.1 .\nInvalidation messages prevent some aborts , and accelerate those that must happen -- thus wasting less work and offloading detection of aborts from servers to clients .\nWhen a transaction aborts , its client restores the cached copies of modified objects to the state they had before the transaction started ; this is possible because a client makes a copy of an object the first time it is modified by a transaction .\n4.3 Redirection\nThe redirector runs on the same local network as the peer group , in one of the peer nodes , or in a special node within the infrastructure .\nIt maintains a directory of pages available in the peer group and provides fast centralized fetch redirection ( see figure 2 ) between the peer caches .\nTo improve performance , clients inform the redirector when they evict pages or objects by piggybacking that information on messages sent to the redirector .\nTo ensure up-to-date objects are fetched from the group cache the redirector tracks the status of the pages .\nA cached page is either complete in which case it contains consistent values for all the objects , or incomplete , in which case some of the objects on a page are marked invalid .\nOnly complete pages are used by the peer fetch .\nThe protocol for maintaining page status when pages are updated and invalidated is described in Section 4.4 .\nWhen a client request has to be processed at the servers , e.g. 
when a complete requested page is unavailable in the peer group or a peer needs to commit a transaction , the redirector acts as a server proxy : it forwards the request to the server , and then forwards the reply back to the client .\nIn addition , in response to invalidations sent by a server , the redirector distributes the update or invalidation information to clients caching the modified page and , after all clients acknowledge , propagates the group acknowledgment back to the server ( see figure 3 ) .\nThe redirector-server protocol is , in effect , the client-server protocol used in the base Thor storage system , where the combined peer group cache is playing the role of a single client cache in the base system .\n4.4 Peer Update\nThe peer update is implemented as follows .\nAn update commit request from a client arriving at the redirector contains the object updates .\nThe redirector retains the updates and propagates the request to the coordinator server .\nAfter a transaction commits , using a two-phase commit if needed , the coordinator server sends a commit reply to the redirector of the committing client group .\nThe redirector forwards the reply to the committing client .\nIt waits for the invalidations to arrive to propagate the corresponding retained ( committed ) updates to the clients caching the modified pages ( see figure 3 ) .\nParticipating servers that are home to objects modified by the transaction generate object invalidations for each cache group that caches pages containing the modified objects ( including the committing group ) .\nThe invalidations are sent lazily to the redirectors to ensure that all the clients in the groups caching the modified objects get rid of the stale data .\nIn cache groups other than the committing group , the redirectors propagate the invalidations to all the clients caching the modified pages , collect the client acknowledgments and , after completing the collection , propagate collective acknowledgments back to the server .\nWithin the committing client group , the arriving invalidations are not propagated .\nInstead , updates are sent to the clients caching those objects ' pages , the updates are acknowledged by the clients , and the collective acknowledgment is propagated to the server .\nAn invalidation renders a cached page unavailable for peer fetch , changing the status of a complete page into an incomplete one .\nIn contrast , an update of a complete page preserves the complete page status .\nAs shown by studies of fragment reconstruction [ 2 ] , such update propagation avoids the performance penalties of false sharing .\nThat is , when clients within a group modify different objects on the same page , the page retains its complete status and remains available for peer fetch .\nTherefore , the effect of peer update is similar to '' eager '' fragment reconstruction [ 2 ] .\nWe have also considered the possibility of allowing a peer to fetch an incomplete page ( with invalid objects marked accordingly ) but decided against this possibility because of the extra complexity involved in tracking invalid objects .
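The commit path just described can be summarized by a small Python sketch. It is hypothetical (invented names, with two-phase commit, logging, acknowledgments and message transport omitted): the redirector retains a committing client's updates and, once the commit reply and the matching invalidations have arrived, applies the retained updates at peers caching the modified pages, so those pages keep their complete status instead of being invalidated.

```python
# Illustrative sketch of peer update at the redirector (hypothetical names;
# in the real system commit replies, invalidations and acks are asynchronous messages).

class PeerUpdateRedirector:
    def __init__(self, group_clients):
        self.clients = group_clients   # client id -> {page id: {object id: value}}
        self.retained = {}             # transaction id -> list of (page, object, new value)

    def commit_request(self, tid, updates):
        """A group member commits: retain its updates before forwarding upstream."""
        self.retained[tid] = list(updates)
        # ... forward the commit request to the coordinating server here ...

    def commit_reply_and_invalidation(self, tid):
        """The commit reply plus the matching invalidations have arrived from the
        servers: apply the retained updates at every peer caching the modified pages."""
        for page, obj, value in self.retained.pop(tid, []):
            for cid, cache in self.clients.items():
                if page in cache:
                    cache[page][obj] = value  # updated in place: the page stays complete,
                                              # so it remains usable for peer fetch
        # ... collect client acks and send the group acknowledgment here ...

# Toy run: two peers cache page "P"; client c1 commits a change to object "x".
clients = {"c1": {"P": {"x": 0, "y": 0}}, "c2": {"P": {"x": 0, "y": 0}}}
r = PeerUpdateRedirector(clients)
r.commit_request("t1", [("P", "x", 42)])
r.commit_reply_and_invalidation("t1")
assert clients["c2"]["P"]["x"] == 42          # the peer sees the committed value without a server fetch
```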
4.5 Vcache\nThe solo commit validation protocol allows clients with up-to-date objects to commit independently of slower ( or failed ) group members .\nAs explained in Section 3.3 , the solo commit protocol allows a transaction T to pass validation if extra coherence information supplied by the client indicates that transaction T has read up-to-date objects .\nClients use page version numbers to provide this extra coherence information .\nThat is , a client includes the page version number corresponding to each object in the read object set sent in the commit request to the server .\nSince a unique page version number corresponds to each committed object update , the page version number associated with an object allows the validation procedure at the server to check if the client transaction has read up-to-date objects .\nThe use of coarse-grain page versions to identify object versions avoids the high penalty of maintaining persistent object versions for small objects , but requires an extra protocol at the client to maintain the mapping from a cached object to the identifying page version ( ObjectToVersion ) .\nThe main implementation issue is maintaining this mapping efficiently .\nAt the server side , when modifications commit , servers associate page version numbers with the invalidations .\nAt validation time , if an unacknowledged invalidation is pending for an object x read by a transaction T , the validation procedure checks if the version number for x in T 's read set matches the version number of the highest pending invalidation for x , in which case the object value is current ; otherwise T fails validation .\nWe note again that the page version number-based checks and the invalidation acknowledgment-based checks are complementary in the solo commit validation and both are needed .\nThe page version number check allows the validation to proceed before invalidation acknowledgments arrive , but by itself a page version number check detects page-level conflicts and is not sufficient to support fine-grain coherence without the object-level invalidations .\nWe now describe how the client manages the mapping ObjectToVersion .\nThe client maintains a page version number for each cached page .\nThe version number satisfies the following invariant VP about the state of objects on a page : if a cached page P has a version number v , then the value of an object o on a cached page P is either invalid or it reflects at least the modifications committed by transactions preceding the transaction that set P 's version number to v .\nNew object values and new page version numbers arrive when a client fetches a page or when a commit reply or invalidations arrive for this page .\nThe new object values modify the page and , therefore , the page version number needs to be updated to maintain the invariant VP .\nA page version number that arrives when a client fetches a page replaces the page version number for this page .\nFigure 5 : Reordered Invalidations .\nSuch an update preserves the invariant VP .\nSimilarly , an in-sequence page version number arriving at the client in a commit or invalidation message advances the version number for the entire cached page , without violating VP .\nHowever , invalidations or updates and their corresponding page version numbers can also arrive at the client out of sequence , in which case updating the page version number could violate VP .
For example , a commit reply for a transaction that updates object x on page P in server S1 , and object y on page Q in server S2 , may deliver a new version number for P from the transaction coordinator S1 before an invalidation generated for an earlier transaction that has modified object r on page P arrives from S1 ( as shown in figure 5 ) .\nThe cache update protocol ensures that the value of any object o in a cached page P reflects the update or invalidation with the highest observed version number .\nThat is , obsolete updates or invalidations received out of sequence do not affect the value of an object .\nTo maintain the ObjectToVersion mapping and the invariant VP in the presence of out-of-sequence arrival of page version numbers , the client manages a small version number cache , the vcache , that maintains the mapping from an object into its corresponding page version number for all reordered version number updates until a complete page version number sequence is assembled .\nWhen the missing version numbers for the page arrive and complete a sequence , the version number for the entire page is advanced .\nThe ObjectToVersion mapping , including the vcache and page version numbers , is used at transaction commit time to provide version numbers for the read object set as follows .\nIf the read object has an entry in the vcache , its version number is equal to the highest version number in the vcache for this object .\nIf the object is not present in the vcache , its version number is equal to the version number of its containing cached page .\nFigure 6 shows the ObjectToVersion mapping in the client cache , including the page version numbers for pages and the vcache .\nFigure 6 : ObjectToVersion map with vcache .\nA client can limit the vcache size as needed since re-fetching a page removes all reordered page version numbers from the vcache .\nHowever , we expect version number reordering to be uncommon and therefore expect the vcache to be very small .
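As a concrete illustration of this bookkeeping, the Python sketch below (hypothetical and simplified to a single page, with persistence and messaging ignored) keeps one version number per cached page, parks out-of-sequence version numbers in a small vcache keyed by object, advances the page version once the sequence becomes contiguous, and answers the version lookup used to tag a transaction's read set at commit time.

```python
# Illustrative client-side ObjectToVersion map with a vcache for reordered
# page version numbers (hypothetical structure, not the paper's code).

class PageVersions:
    def __init__(self, page_id, base_version):
        self.page_id = page_id
        self.page_version = base_version   # version covering the whole cached page
        self.vcache = {}                   # object id -> highest reordered version seen
        self.pending = set()               # reordered version numbers not yet in sequence

    def apply(self, version, obj_ids):
        """Record an update or invalidation carrying `version` for `obj_ids`."""
        if version == self.page_version + 1:
            self.page_version = version
            # absorb any parked versions that now form a contiguous sequence
            while self.page_version + 1 in self.pending:
                self.pending.remove(self.page_version + 1)
                self.page_version += 1
            # entries at or below the page version are no longer needed
            self.vcache = {o: v for o, v in self.vcache.items() if v > self.page_version}
        elif version > self.page_version:
            # out of sequence: remember per-object versions until the gap closes
            self.pending.add(version)
            for o in obj_ids:
                self.vcache[o] = max(version, self.vcache.get(o, 0))
        # versions at or below the current page version are obsolete and ignored

    def version_for(self, obj_id):
        """Version number reported for an object in a commit request's read set."""
        return self.vcache.get(obj_id, self.page_version)

p = PageVersions("P", base_version=3)
p.apply(5, ["x"])            # reordered: version 5 arrives before version 4
assert p.version_for("x") == 5 and p.page_version == 3
p.apply(4, ["y"])            # the gap closes: the page version advances to 5
assert p.page_version == 5 and p.version_for("x") == 5
```

Re-fetching the page would simply reset page_version and clear both pending and vcache, matching the observation above that re-fetching removes all reordered page version numbers.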
''\nTo answer the question , we have implemented a BuddyCache prototype for the OCC protocol and conducted experiments to analyze the performance benefits and costs over a range of network latencies .\n6.1 Analysis\nThe performance benefits of peer fetch and peer update are due to avoided server interactions .\nThis section presents a simple analytical performance model for this benefit .\nThe avoided server interactions correspond to different types of client cache misses .\nThese can be cold misses , invalidation misses and capacity misses .\nOur analysis focuses on cold misses and invalidation misses , since the benefit of avoiding capacity misses can be derived from the cold misses .\nMoreover , technology trends indicate that memory and storage capacity will continue to grow and therefore a typical BuddyCache configuration is likely not to be cache limited .\nThe client cache misses are determined by several variables , including the workload and the cache configuration .\nOur analysis tries , as much as possible , to separate these variables so they can be controlled in the validation experiments .\nTo study the benefit of avoiding cold misses , we consider cold cache performance in a read-only workload ( no invalidation misses ) .\nWe expect peer fetch to improve the latency cost for client cold cache misses by fetching objects from nearby cache .\nWe evaluate how the redirection cost affects this benefit by comparing and analyzing the performance of an application running in a storage system with BuddyCache and without ( called Base ) .\nTo study the benefit of avoiding invalidation misses , we consider hot cache performance in a workload with modifications ( with no cold misses ) .\nIn hot caches we expect BuddyCache to provide two complementary benefits , both of which reduce the latency of access to shared modified objects .\nPeer update lets a client access an object modified by a nearby collaborating peer without the delay imposed by invalidation-only protocols .\nIn groups where peers share a read-only interest in the modified objects , peer fetch allows a client to access a modified object as soon as a collaborating peer has it , which avoids the delay of server fetch without the high cost imposed by the update-only protocols .\nTechnology trends indicate that both benefits will remain important in the foreseeable future .\nThe trend toward increase in available network bandwidth decreases the cost of the update-only protocols .\nHowever , the trend toward increasingly large caches , that are updated when cached objects are modified , makes invalidation-base protocols more attractive .\nTo evaluate these two benefits we consider the performance of an application running without BuddyCache with an application running BuddyCache in two configurations .\nOne , where a peer in the group modifies the objects , and another where the objects are modified by a peer outside the group .\nPeer update can also avoid invalidation misses due to false-sharing , introduced when multiple peers update different objects on the same page concurrently .\nWe do not analyze this benefit ( demonstrated by earlier work [ 2 ] ) because our benchmarks do not allow us to control object layout , and also because this benefit can be derived given the cache hit rate and workload contention .\n6.1.1 The Model\nThe model considers how the time to complete an execution with and without BuddyCache is affected by invalidation misses and cold misses .\nConsider k clients running concurrently accessing uniformly 
a shared set of N pages in BuddyCache ( BC ) and Base .\nLet tfetch ( S ) , tredirect ( S ) , tcommit ( S ) , and tcompute ( S ) be the time it takes a client to , respectively , fetch from server , peer fetch , commit a transaction and compute in a transaction , in a system S , where S is either a system with BuddyCache ( BC ) or without ( Base ) .\nFor simplicity , our model assumes the fetch and commit times are constant .\nIn general they may vary with the server load , e.g. they depend on the total number of clients in the system .\nThe number of misses avoided by peer fetch depends on k , the number of clients in the BuddyCache , and on the client co-interest in the shared data .\nIn a specific BuddyCache execution it is modeled by the variable r , defined as a number of fetches arriving at the redirector for a given `` version '' of page P ( i.e. until an object on the page is invalidated ) .\nConsider an execution with cold misses .\nA client starts with a cold cache and runs read-only workload until it accesses all N pages while committing l transactions .\nWe assume there are no capacity misses , i.e. the client cache is large enough to hold N pages .\nIn BC , r cold misses for page P reach the redirector .\nThe first of the misses fetches P from the server , and the subsequent r \u2212 1 misses are redirected .\nSince each client accesses the entire shared set r = k. Let Tcold ( Base ) and Tcold ( BC ) be the time it takes to complete the l transactions in Base and BC .\nConsider next an execution with invalidation misses .\nA client starts with a hot cache containing the working set of N pages .\nWe focus on a simple case where one client ( writer ) runs a workload with modifications , and the other clients ( readers ) run a read-only workload .\nIn a group containing the writer ( BCW ) , peer update eliminates all invalidation misses .\nIn a group containing only readers ( BCR ) , during a steady state execution with uniform updates , a client transaction has missinv invalidation misses .\nConsider the sequence of r client misses on page P that arrive at the redirector in BCR between two consequent invalidations of page P .\nThe first miss goes to the server , and the r -- 1 subsequent misses are redirected .\nUnlike with cold misses , r < k because the second invalidation disables redirection for P until the next miss on P causes a server fetch .\nAssuming uniform access , a client invalidation miss has a chance of 1/r to be the first miss ( resulting in server fetch ) , and a chance of ( 1 -- 1/r ) to be redirected .\nLet Tinval ( Base ) , Tinval ( BCR ) and Tinval ( BCW ) be the time it takes to complete a single transaction in the Base , BCR and BCW systems .\nIn the experiments described below , we measure the parameters N , r , missinv , tfetch ( S ) , tredirect ( S ) , tcommit ( S ) , and tcompute ( S ) .\nWe compute the completion times derived using the above model and derive the benefits .\nWe then validate the model by comparing the derived values to the completion times and benefits measured directly in the experiments .\n6.2 Experimental Setup\nBefore presenting our results we describe our experimental setup .\nWe use two systems in our experiments .\nThe Base system runs Thor distributed object storage system [ 23 ] with clients connecting directly to the servers .\nThe Buddy system runs our implementation of BuddyCache prototype in Thor , supporting peer fetch , peer update , and solo commit , but not the failover .\nOur workloads are based on the multi-user OO7 
benchmark [8]; this benchmark is intended to capture the characteristics of many different multi-user CAD/CAM/CASE applications, but does not model any specific application.\nWe use OO7 because it is a standard benchmark for measuring object storage system performance.\nThe OO7 database contains a tree of assembly objects with leaves pointing to three composite parts chosen randomly from among 500 such objects.\nEach composite part contains a graph of atomic parts linked by connection objects; each atomic part has 3 outgoing connections.\nWe use a medium database that has 200 atomic parts per composite part.\nThe multi-user database allocates for each client a ''private'' module consisting of one tree of assembly objects, and adds an extra ''shared'' module that scales proportionally to the number of clients.\nWe expect a typical BuddyCache configuration not to be cache limited and therefore focus on workloads where the objects in the client working set fit in the cache.\nSince the goal of our study is to evaluate how effectively our techniques deal with access to shared objects, we limit client access to shared data only.\nThis allows us to study the effect our techniques have on cold cache and cache consistency misses and to isolate as much as possible the effect of cache capacity misses.\nTo keep the length of our experiments reasonable, we use small caches.\nThe OO7 benchmark generates database modules of predefined size.\nIn our implementation of OO7, the ''private'' module size is about 38MB.\nTo make sure that the entire working set fits into the cache, we use a single private module and choose a cache size of 40MB for each client.\nThe OO7 database is generated with modules for 3 clients, only one of which is used in our experiments, as explained above.\nThe objects in the database are clustered in 8K pages, which are also the unit of transfer in the fetch requests.\nWe consider two types of transaction workloads in our analysis, read-only and read-write.\nIn the OO7 benchmark, read-only transactions use the T1 traversal that performs a depth-first traversal of the entire composite part graph.\nWrite transactions use the T2b traversal that is identical to T1 except that it modifies all the atomic parts in a single composite.\nA single transaction includes one traversal and there is no sleep time between transactions.\nBoth read-only and read-write transactions always work with data from the same module.\nClients running read-write transactions do not modify objects in every transaction; instead, they have a 50% probability of running a read-only transaction.\nThe database was stored by a server on a 40GB IBM 7200RPM hard drive, with an 8.5 ms average seek time and a 40 MB/sec data transfer rate.\nIn the Base system, clients connect directly to the database.\nIn the Buddy system, clients connect to the redirector, which connects to the database.\nWe run the experiments with 1-10 clients in Base, and one or two 1-10 client groups in Buddy.\nThe server, the clients and the redirectors ran on 850MHz Intel Pentium III processor based PCs with 512MB of memory, running Linux Red Hat 6.2.\nThey were connected by a 100Mb/s Ethernet.\nThe server was configured with a 50MB cache (of which 6MB were used for the modified object buffer); the client had a 40MB cache.\nThe experiments ran in the Utah experimental testbed, emulab.net [1].\nTable 1: Commit and Server fetch\nTable 2: Peer fetch\n6.3 Basic Costs\nThis section analyzes the basic cost of the
requests in the Buddy system during the OO7 runs .\n6.3.1 Redirection\nFetch and commit requests in the BuddyCache cross the redirector , a cost not incurred in the Base system .\nFor a request redirected to the server ( server fetch ) the extra cost of redirection includes a local request from the client to redirector on the way to and from the server .\nWe evaluate this latency overhead indirectly by comparing the measured latency of the Buddy system server fetch or commit request with the measured latency of the corresponding request in the Base system .\nTable 1 shows the latency for the commit and server fetch requests in the Base and Buddy system for 3 client and 5 client groups in a fast local area network .\nAll the numbers were computed by averaging measured request latency over 1000 requests .\nThe measurements show that the redirection cost of crossing the redirector in not very high even in a local area network .\nThe commit cost increases with the number of clients since commits are processed sequentially .\nThe fetch cost does not increase as much because the server cache reduces this cost .\nIn a large system with many groups , however , the server cache becomes less efficient .\nTo evaluate the overheads of the peer fetch , we measure the peer fetch latency ( PeerFetch ) at the requesting client and break down its component costs .\nIn peer fetch , the cost of the redirection includes , in addition to the local network request cost , the CPU processing latency of crossing the redirector and crossing the helper , the latter including the time to process the help request and the time to copy , and unswizzle the requested page .\nWe directly measured the time to copy and unswizzle the requested page at the helper , ( CopyUnswizzle ) , and timed the crossing times using a null crossing request .\nTable 2 summarizes the latencies that allows us to break down the peer fetch costs .\nCrossRedirector , includes the CPU latency of crossing the redirector plus a local network round-trip and is measured by timing a round-trip null request issued by a client to the redirector .\nAlertHelper , includes the time for the helper to notice the request plus a network roundtrip , and is measured by timing a round-trip null request issued from an auxiliary client to the helper client .\nThe local network latency is fixed and less than 0.1 ms. 
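For illustration, the null round-trip measurements mentioned above (averaged over many requests) can be pictured with a small timing harness like the sketch below. This is an assumed illustration rather than the instrumentation used in the experiments, and send_null_request stands in for whatever issues a single no-op round trip to the redirector or helper.

```python
import time

# Illustrative timing harness for null round-trip measurements (an assumption,
# not the paper's tooling): issue a no-op request repeatedly and report the
# average latency in milliseconds.

def average_latency_ms(send_null_request, rounds=1000):
    start = time.perf_counter()
    for _ in range(rounds):
        send_null_request()   # one client -> redirector (or helper) round trip
    elapsed = time.perf_counter() - start
    return elapsed / rounds * 1000.0

# Example with a dummy no-op request (measures only loop overhead):
# print(average_latency_ms(lambda: None))
```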
The AlertHelper latency which includes the elapsed time from the help request arrival until the start of help request processing is highly variable and therefore contributes to the high variability of the PeerFetch time .\nThis is because the client in Buddy system is currently single threaded and therefore only starts processing a help request when blocked waiting for a fetch - or commit reply .\nThis overhead is not inherent to the BuddyCache architecture and could be mitigated by a multi-threaded implementation in a system with pre-emptive scheduling .\n6.3.2 Version Cache\nThe solo commit allows a fast client modifying an object to commit independently of a slow peer .\nThe solo commit mechanism introduces extra processing at the server at transaction validation time , and extra processing at the client at transaction commit time and at update or invalidation processing time .\nThe server side overheads are minimal and consist of a page version number update at commit time , and a version number comparison at transaction validation time .\nThe version cache has an entry only when invalidations or updates arrive out of order .\nThis may happen when a transaction accesses objects in multiple servers .\nOur experiments run in a single server system and therefore , the commit time overhead of version cache management at the client does not contribute in the results presented in the section below .\nTo gauge these client side overheads in a multiple server system , we instrumented the version cache implementation to run with a workload trace that included reordered invalidations and timed the basic operations .\nThe extra client commit time processing includes a version cache lookup operation for each object read by the transaction at commit request preparation time , and a version cache insert operation for each object updated by a transaction at commit reply processing time , but only if the updated page is missing some earlier invalidations or updates .\nIt is important that the extra commit time costs are kept to a minimum since client is synchronously waiting for the commit completion .\nThe measurements show that in the worst case , when a large number of invalidations arrive out of order , and about half of the objects modified by T2a ( 200 objects ) reside on reordered pages , the cost of updating the version cache is 0.6 ms. 
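As a rough sketch of the client-side bookkeeping described above (an assumed structure, not the actual Thor/BuddyCache data types), the vcache can be pictured as a per-object overlay on the per-page version numbers. In this simplified version the caller indicates whether an arriving version number is in sequence, whereas the real protocol infers that from the version numbers themselves.

```python
# Simplified sketch of the ObjectToVersion bookkeeping (page versions + vcache)
# described above. Not the actual implementation; method and field names are
# illustrative assumptions.

class VersionCache:
    def __init__(self):
        self.page_version = {}   # page id -> highest in-sequence version number
        self.vcache = {}         # (page id, object id) -> reordered version number

    def record(self, page_id, object_id, version, in_sequence):
        if in_sequence:
            # An in-sequence fetch, commit reply or invalidation advances the
            # version number of the entire cached page.
            current = self.page_version.get(page_id, 0)
            self.page_version[page_id] = max(current, version)
        else:
            # Out-of-sequence arrival: remember the version for this object only.
            key = (page_id, object_id)
            self.vcache[key] = max(self.vcache.get(key, 0), version)

    def complete_sequence(self, page_id, version):
        # Once the missing version numbers arrive and the sequence is complete,
        # advance the page and drop the reordered entries it subsumes.
        self.page_version[page_id] = version
        stale = [k for k, v in self.vcache.items() if k[0] == page_id and v <= version]
        for key in stale:
            del self.vcache[key]

    def version_for(self, page_id, object_id):
        # Commit-time lookup for an object in the transaction's read set:
        # a vcache entry wins, otherwise fall back to the page version number.
        key = (page_id, object_id)
        if key in self.vcache:
            return self.vcache[key]
        return self.page_version.get(page_id, 0)
```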
The invalidation-time costs are comparable, but since invalidations and updates are processed in the background, this cost is less important for the overall performance.\nWe are currently working on optimizing the version cache implementation to further reduce these costs.\n6.4 Overall Performance\nThis section examines the performance gains seen by an application running the OO7 benchmark with a BuddyCache in a wide area network.\n6.4.1 Cold Misses\nTo evaluate the performance gains from avoiding cold misses, we compare the cold cache performance of the OO7 benchmark running a read-only workload in the Buddy and Base systems.\nWe derive the times by timing the execution of the systems in the local area network environment and substituting 40 ms and 80 ms delays for the requests crossing the redirector and the server to estimate the performance in the wide-area network.\nFigures 7 and 8 show the overall time to complete 1000 cold cache transactions.\nThe numbers were obtained by averaging the overall time of each client in the group.\nFigure 7: Breakdown for cold read-only 40ms RTT\nFigure 8: Breakdown for cold read-only 80ms RTT\nFigure 9: Cold miss benefit\nFigure 10: Breakdown for hot read-write 40ms RTT\nThe results show that in a 40 ms network the Buddy system significantly reduces the overall time compared to the Base system, providing a 39% improvement in a three client group, a 46% improvement in the five client group and a 56% improvement in the ten client case.\nThe overall time includes time spent performing client computation, direct fetch requests, peer fetches, and commit requests.\nIn the three client group, Buddy and Base incur almost the same commit cost and therefore the entire performance benefit of Buddy is due to peer fetch avoiding direct fetches.\nIn the five and ten client groups the server fetch cost for an individual client decreases because, with more clients faulting a fixed size shared module into the BuddyCache, each client needs to perform fewer server fetches.\nFigure 8 shows the overall time and cost breakdown in the 80 ms network.\nThe BuddyCache provides similar performance improvements as with the 40ms network.\nHigher network latency increases the performance advantage provided by peer fetch relative to direct fetch, but this benefit is offset by the increased commit times.\nFigure 9 shows the relative latency improvement provided by BuddyCache (computed as the overall measured time difference between Buddy and Base relative to Base) as a function of network latency, with a fixed server load.\nThe cost of the extra mechanism dominates the BuddyCache benefit when network latency is low.\nAt typical Internet latencies of 20ms-60ms the benefit increases with latency and levels off around 60ms with a significant (up to 62% for ten clients) improvement.\nFigure 9 includes both the measured improvement and the improvement derived using the analytical model.\nRemarkably, the analytical results predict the measured improvement very closely, albeit being somewhat higher than the empirical values.\nThe main reason why the simplified model works well is that it captures the dominant performance component, the network latency cost.\n6.4.2 Invalidation Misses\nTo evaluate the performance benefits provided by BuddyCache due to avoided invalidation misses, we compared the hot cache performance of the Base system with two different Buddy system configurations.\nOne of the Buddy system configurations represents a collaborating peer group modifying shared objects (the
Writer group), while the other represents a group where the peers share a read-only interest in the modified objects (the Reader group) and the writer resides outside the BuddyCache group.\nIn each of the three systems, a single client runs a read-write workload (the writer) and three other clients run a read-only workload (the readers).\nA Buddy system with one group containing a single reader and another group containing two readers and one writer models the Writer group.\nA Buddy system with one group containing a single writer and another group running three readers models the Reader group.\nIn Base, one writer and three readers access the server directly.\nThis simple configuration is sufficient to show the impact of the BuddyCache techniques.\nFigure 11: Breakdown for hot read-write 80ms RTT\nFigures 10 and 11 show the overall time to complete 1000 hot cache OO7 read-only transactions.\nWe obtain the numbers by running 2000 transactions to filter out cold misses and then timing the next 1000 transactions.\nHere again, the reported numbers are derived from the local area network experiment results.\nThe results show that the BuddyCache significantly reduces the completion time compared to the Base system.\nIn a 40 ms network, the overall time in the Writer group improves by 62% compared to Base.\nThis benefit is due to peer update, which avoids all misses due to updates.\nThe overall time in the Reader group improves by 30% and is due to peer fetch, which allows a client to access an invalidated object at the cost of a local fetch, avoiding the delay of fetching from the server.\nThe latter is an important benefit because it shows that on workloads with updates, peer fetch allows an invalidation-based protocol to provide some of the benefits of an update-based protocol.\nNote that the performance benefit delivered by peer fetch in the Reader group is approximately 50% less than the performance benefit delivered by peer update in the Writer group.\nThis difference is similar in the 80ms network.\nFigure 12 shows the relative latency improvement provided by BuddyCache in the Buddy Reader and Buddy Writer configurations (computed as the overall time difference between Buddy Reader and Base relative to Base, and between Buddy Writer and Base relative to Base) in a hot cache experiment as a function of increasing network latency, for a fixed server load.\nFigure 12: Invalidation miss benefit\nThe peer update benefit dominates the overhead in the Writer configuration even in a low-latency network (peer update incurs minimal overhead) and offers a significant 44-64% improvement over the entire latency range.\nThe figure includes both the measured improvement and the improvement derived using the analytical model.\nAs in the cold cache experiments, here the analytical results predict the measured improvement closely.\nThe difference is minimal in the 'writer group', and somewhat higher in the 'reader group' (consistent with the results in the cold cache experiments).\nAs in the cold cache case, the reason why the simplified analytical model works well is that it captures the cost of network latency, the dominant performance cost.\n7.\nCONCLUSION\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task.\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects.\nThese properties are difficult to provide in wide-area networks because of high network latency.\nThis paper described BuddyCache,
a new transactional cooperative caching [20, 16, 13, 2, 28] technique that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.\nThe technique improves performance yet provides strong correctness and availability properties in the presence of node failures and slow clients.\nBuddyCache uses redirection to fetch missing objects directly from group members' caches and to support peer update, a new lightweight ''application-level multicast'' technique that gives group members consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group.\nRedirection, however, can interfere with object availability.\nSolo commit is a new validation technique that allows a client in a group to commit independently of slow or failed peers.\nIt provides fine-grained validation using inexpensive coarse-grain version information.\nWe have designed and implemented a BuddyCache prototype in the Thor distributed transactional object storage system [23] and evaluated the benefits and costs of the system over a range of network latencies.\nAnalytical results, supported by system measurements using the multi-user OO7 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits; e.g., for latencies ranging from 40 to 80 milliseconds round trip time, clients using BuddyCache can reduce the latency of access to shared objects by up to 50% compared to clients accessing the repository directly.\nThe main contributions of the paper are:\n1. techniques that provide fine-grain strong-consistency access in high-latency environments,\n2. an implementation of the system prototype that yields strong performance gains over the base system, and\n3. an analytical and measurement-based performance evaluation of the costs and benefits of the new techniques, capturing the dominant performance cost, high network latency."} {"id": "H-13", "title": "", "abstract": "", "keyphrases": ["clickthrough pattern", "caption featur", "web search behavior", "human factor", "extract summar", "snippet", "queri log", "queri re-formul", "signific word", "clickthrough invers", "queri term match", "web search", "summar"], "prmu": [], "lvl-1": "The Influence of Caption Features on Clickthrough Patterns in Web Search Charles L. A. Clarke Eugene Agichtein Susan Dumais and Ryen W. 
White University of Waterloo Emory University Microsoft Research ABSTRACT Web search engines present lists of captions, comprising title, snippet, and URL, to help users decide which search results to visit.\nUnderstanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation.\nIn this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions.\nThe findings of our study suggest that relatively simple caption features such as the presence of all terms query terms, the readability of the snippet, and the length of the URL shown in the caption, can significantly influence users'' Web search behavior.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-search process General Terms Experimentation, Human Factors 1.\nINTRODUCTION The major commercial Web search engines all present their results in much the same way.\nEach search result is described by a brief caption, comprising the URL of the associated Web page, a title, and a brief summary (or snippet) describing the contents of the page.\nOften the snippet is extracted from the Web page itself, but it may also be taken from external sources, such as the human-generated summaries found in Web directories.\nFigure 1 shows a typical Web search, with captions for the top three results.\nWhile the three captions share the same basic structure, their content differs in several respects.\nThe snippet of the third caption is nearly twice as long as that of the first, while the snippet is missing entirely from the second caption.\nThe title of the third caption contains all of the query terms in order, while the titles of the first and second captions contain only two of the three terms.\nOne of the query terms is repeated in the first caption.\nAll of the query terms appear in the URL of the third caption, while none appear in the URL of the first caption.\nThe snippet of the first caption consists of a complete sentence that concisely describes the associated page, while the snippet of the third caption consists of two incomplete sentences that are largely unrelated to the overall contents of the associated page and to the apparent intent of the query.\nWhile these differences may seem minor, they may also have a substantial impact on user behavior.\nA principal motivation for providing a caption is to assist the user in determining the relevance of the associated page without actually having to click through to the result.\nIn the case of a navigational query - particularly when the destination is well known - the URL alone may be sufficient to identify the desired page.\nBut in the case of an informational query, the title and snippet may be necessary to guide the user in selecting a page for further study, and she may judge the relevance of a page on the basis of the caption alone.\nWhen this judgment is correct, it can speed the search process by allowing the user to avoid unwanted material.\nWhen it fails, the user may waste her time clicking through to an inappropriate result and scanning a page containing little or nothing of interest.\nEven worse, the user may be misled into skipping a page that contains desired information.\nAll three of the results in figure 1 are relevant, with some limitations.\nThe first result links to the main Yahoo Kids!\nhomepage, but it is then necessary to follow a link in 
a menu to find the main page for games.\nDespite appearances, the second result links to a surprisingly large collection of online games, primarily with environmental themes.\nThe third result might be somewhat disappointing to a user, since it leads to only a single game, hosted at the Centers for Disease Control, that could not reasonably be described as online.\nUnfortunately, these page characteristics are not entirely reflected in the captions.\nIn this paper, we examine the influence of caption features on user``s Web search behavior, using clickthroughs extracted from search engines logs as our primary investigative tool.\nUnderstanding this influence may help to validate algorithms and guidelines for the improved generation of the Figure 1: Top three results for the query: kids online games.\ncaptions themselves.\nIn addition, these features can play a role in the process of inferring relevance judgments from user behavior [1].\nBy better understanding their influence, better judgments may result.\nDifferent caption generation algorithms might select snippets of different lengths from different areas of a page.\nSnippets may be generated in a query-independent fashion, providing a summary of the page as a whole, or in a querydependent fashion, providing a summary of how the page relates to the query terms.\nThe correct choice of snippet may depend on aspects of both the query and the result page.\nThe title may be taken from the HTML header or extracted from the body of the document [8].\nFor links that re-direct, it may be possible to display alternative URLs.\nMoreover, for pages listed in human-edited Web directories such as the Open Directory Project1 , it may be possible to display alternative titles and snippets derived from these listings.\nWhen these alternative snippets, titles and URLs are available, the selection of an appropriate combination for display may be guided by their features.\nA snippet from a Web directory may consist of complete sentences and be less fragmentary than an extracted snippet.\nA title extracted from the body may provide greater coverage of the query terms.\nA URL before re-direction may be shorter and provide a clearer idea of the final destination.\nThe work reported in this paper was undertaken in the context of the Windows Live search engine.\nThe image in figure 1 was captured from Windows Live and cropped to eliminate branding, advertising and navigational elements.\nThe experiments reported in later sections are based on Windows Live query logs, result pages and relevance judgments collected as part of ongoing research into search engine performance [1,2].\nNonetheless, given the similarity of caption formats across the major Web search engines we believe the results are applicable to these other engines.\nThe query in 1 www.dmoz.org figure 1 produces results with similar relevance on the other major search engines.\nThis and other queries produce captions that exhibit similar variations.\nIn addition, we believe our methodology may be generalized to other search applications when sufficient clickthrough data is available.\n2.\nRELATED WORK While commercial Web search engines have followed similar approaches to caption display since their genesis, relatively little research has been published about methods for generating these captions and evaluating their impact on user behavior.\nMost related research in the area of document summarization has focused on newspaper articles and similar material, rather than Web pages, and has conducted 
evaluations by comparing automatically generated summaries with manually generated summaries.\nMost research on the display of Web results has proposed substantial interface changes, rather than addressing details of the existing interfaces.\n2.1 Display of Web results Varadarajan and Hristidis [16] are among the few who have attempted to improve directly upon the snippets generated by commercial search systems, without introducing additional changes to the interface.\nThey generated snippets from spanning trees of document graphs and experimentally compared these snippets against the snippets generated for the same documents by the Google desktop search system and MSN desktop search system.\nThey evaluated their method by asking users to compare snippets from the various sources.\nCutrell and Guan [4] conducted an eye-tracking study to investigate the influence of snippet length on Web search performance and found that the optimal snippet length varied according to the task type, with longer snippets leading to improved performance for informational tasks and shorter snippets for navigational tasks.\nMany researchers have explored alternative methods for displaying Web search results.\nDumais et al. [5] compared an interface typical of those used by major Web search engines with one that groups results by category, finding that users perform search tasks faster with the category interface.\nPaek et al. [12] propose an interface based on a fisheye lens, in which mouse hovers and other events cause captions to zoom and snippets to expand with additional text.\nWhite et al. [17] evaluated three alternatives to the standard Web search interface: one that displays expanded summaries on mouse hovers, one that displays a list of top ranking sentences extracted from the results taken as a group, and one that updates this list automatically through implicit feedback.\nThey treat the length of time that a user spends viewing a summary as an implicit indicator of relevance.\nTheir goal was to improve the ability of users to interact with a given result set, helping them to look beyond the first page of results and to reduce the burden of query re-formulation.\n2.2 Document summarization Outside the narrow context of Web search considerable related research has been undertaken on the problem of document summarization.\nThe basic idea of extractive summarization - creating a summary by selecting sentences or fragments - goes back to the foundational work of Luhn [11].\nLuhn``s approach uses term frequencies to identify significant words within a document and then selects and extracts sentences that contain significant words in close proximity.\nA considerable fraction of later work may be viewed as extending and tuning this basic approach, developing improved methods for identifying significant words and selecting sentences.\nFor example, a recent paper by Sun et al. 
[14] describes a variant of Luhn``s algorithm that uses clickthrough data to identify significant words.\nAt its simplest, snippet generation for Web captions might also be viewed as following this approach, with query terms taking on the role of significant words.\nSince 2000, the annual Document Understanding Conference (DUC) series, conducted by the US National Institute of Standards and Technology, has provided a vehicle for evaluating much of the research in document summarization2 .\nEach year DUC defines a methodology for one or more experimental tasks, and supplies the necessary test documents, human-created summaries, and automatically extracted baseline summaries.\nThe majority of participating systems use extractive summarization, but a number attempt natural language generation and other approaches.\nEvaluation at DUC is achieved through comparison with manually generated summaries.\nOver the years DUC has included both single-document summarization and multidocument summarization tasks.\nThe main DUC 2007 task is posed as taking place in a question answering context.\nGiven a topic and 25 documents, participants were asked to generate a 250-word summary satisfying the information need enbodied in the topic.\nWe view our approach of evaluating summarization through the analysis of Web logs as complementing the approach taken at DUC.\nA number of other researchers have examined the value of query-dependent summarization in a non-Web context.\nTombros and Sanderson [15] compared the performance of 20 subjects searching a collection of newspaper articles when 2 duc.nist.gov guided by query-independent vs. query-dependent snippets.\nThe query-independent snippets were created by extracting the first few sentences of the articles; the query-dependent snippets were created by selecting the highest scoring sentences under a measure biased towards sentences containing query terms.\nWhen query-dependent summaries were presented, subjects were better able to identify relevant documents without clicking through to the full text.\nGoldstein et al. [6] describe another extractive system for generating query-dependent summaries from newspaper articles.\nIn their system, sentences are ranked by combining statistical and linguistic features.\nThey introduce normalized measures of recall and precision to facilitate evaluation.\n2.3 Clickthroughs Queries and clickthroughs taken from the logs of commercial Web search engines have been widely used to improve the performance of these systems and to better understand how users interact with them.\nIn early work, Broder [3] examined the logs of the AltaVista search engine and identified three broad categories of Web queries: informational, navigational and transactional.\nRose and Levinson [13] conducted a similar study, developing a hierarchy of query goals with three top-level categories: informational, navigational and resource.\nUnder their taxonomy, a transactional query as defined by Broder might fall under either of their three categories, depending on details of the desired transaction.\nLee et al. [10] used clickthrough patterns to automatically categorize queries into one of two categories: informational - for which multiple Websites may satisfy all or part of the user``s need - and navigational - for which users have a particular Website in mind.\nUnder their taxonomy, a transactional or resource query would be subsumed under one of these two categories.\nAgichtein et al. 
interpreted caption features, clickthroughs and other user behavior as implicit feedback to learn preferences [2] and improve ranking [1] in Web search.\nXue et al. [18] present several methods for associating queries with documents by analyzing clickthrough patterns and links between documents.\nQueries associated with documents in this way are treated as meta-data.\nIn effect, they are added to the document content for indexing and ranking purposes.\nOf particular interest to us is the work of Joachims et al. [9] and Granka et al. [7].\nThey conducted eye-tracking studies and analyzed log data to determine the extent to which clickthrough data may be treated as implicit relevance judgments.\nThey identified a trust bias, which leads users to prefer the higher ranking result when all other factors are equal.\nIn addition, they explored techniques that treat clicks as pairwise preferences.\nFor example, a click at position N + 1 - after skipping the result at position N - may be viewed as a preference for the result at position N+1 relative to the result at position N.\nThese findings form the basis of the clickthrough inversion methodology we use to interpret user interactions with search results.\nOur examination of large search logs compliments their detailed analysis of a smaller number of participants.\n3.\nCLICKTHROUGH INVERSIONS While other researchers have evaluated the display of Web search results through user studies - presenting users with a small number of different techniques and asking them to complete experimental tasks - we approach the problem by extracting implicit feedback from search engine logs.\nExamining user behavior in situ allows us to consider many more queries and caption characteristics, with the volume of available data compensating for the lack of a controlled lab environment.\nThe problem remains of interpreting the information in these logs as implicit indicators of user preferences, and in this matter we are guided by the work of Joachims et al. [9].\nWe consider caption pairs, which appear adjacent to one another in the result list.\nOur primary tool for examining the influence of caption features is a type of pattern observed with respect to these caption pairs, which we call a clickthrough inversion.\nA clickthrough inversion occurs at position N when the result at position N receives fewer clicks than the result at position N + 1.\nFollowing Joachims et al. [9], we interpret a clickthrough inversion as indicating a preference for the lower ranking result, overcoming any trust bias.\nFor simplicity, in the remainder of this paper we refer to the higher ranking caption in a pair as caption A and the lower ranking caption as caption B. 3.1 Extracting clickthroughs For the experiments reported in this paper, we sampled a subset of the queries and clickthroughs from the logs of the Windows Live search engine over a period of 3-4 days on three separate occasions: once for results reported in section 3.3, once for a pilot of our main experiment, and once for the experiment itself (sections 4 and 5).\nFor simplicity we restricted our sample to queries submitted to the US English interface and ignored any queries containing complex or non-alphanumeric terms (e.g. 
operators and phrases).\nAt the end of each sampling period, we downloaded captions for the queries associated with the clickthrough sample.\nWhen identifying clickthroughs in search engine logs, we consider only the first clickthrough action taken by a user after entering a query and viewing the result page.\nUsers are identified by IP address, which is a reasonably reliable method of eliminating multiple results from a single user, at the cost of falsely eliminating results from multiple users sharing the same address.\nBy focusing on the initial clickthrough, we hope to capture a user's impression of the relative relevance within a caption pair when first encountered.\nIf the user later clicks on other results or re-issues the same query, we ignore these actions.\nAny preference captured by a clickthrough inversion is therefore a preference among a group of users issuing a particular query, rather than a preference on the part of a single user.\nIn the remainder of the paper, we use the term clickthrough to refer only to this initial action.\nGiven the dynamic nature of the Web and the volumes of data involved, search engine logs are bound to contain considerable noise.\nFor example, even over a period of hours or minutes the order of results for a given query can change, with some results dropping out of the top ten and new ones appearing.\nFor this reason, we retained clickthroughs for a specific combination of a query and a result only if this result appears in a consistent position for at least 50% of the clickthroughs.\nClickthroughs for the same result when it appeared at other positions were discarded.\nFor similar reasons, if we did not detect at least ten clickthroughs for a particular query during the sampling period, no clickthroughs for that query were retained.\nFigure 2: Clickthrough curves for three queries: a) a stereotypical navigational query, b) a stereotypical informational query, and c) a query exhibiting clickthrough inversions.\nThe outcome at the end of each sampling period is a set of records, with each record describing the clickthroughs for a given query/result combination.\nEach record includes a query, a result position, a title, a snippet, a URL, the number of clickthroughs for this result, and the total number of clickthroughs for this query.\nWe then processed this set to generate clickthrough curves and identify inversions.\n3.2 Clickthrough curves It could be argued that under ideal circumstances, clickthrough inversions would not be present in search engine logs.\nA hypothetical perfect search engine would respond to a query by placing the result most likely to be relevant first in the result list.\nEach caption would appropriately summarize the content of the linked page and its relationship to the query, allowing users to make accurate judgments.\nLater results would complement earlier ones, linking to novel or supplementary material, and ordered by their interest to the greatest number of users.\nFigure 2 provides clickthrough curves for three example queries.\nFor each example, we plot the percentage of clickthroughs against position for the top ten results.\nThe first query (craigslist) is stereotypically navigational, showing a spike at the
correct answer (www.craigslist.org).\nThe second query is informational in the sense of Lee et al. [10] (periodic table of elements).\nIts curve is flatter and less skewed toward a single result.\nFor both queries, the number of clickthroughs is consistent with the result positions, with the percentage of clickthroughs decreasing monotonically as position increases, the ideal behavior.\nRegrettably, no search engine is perfect, and clickthrough inversions are seen for many queries.\nFor example, for the third query (kids online games) the clickthrough curve exhibits a number of clickthrough inversions, with an apparent preference for the result at position 4.\nSeveral causes may be enlisted to explain the presence of an inversion in a clickthrough curve.\nThe search engine may have failed in its primary goal, ranking more relevant results below less relevant results.\nEven when the relative ranking is appropriate, a caption may fail to reflect the content of the underlying page with respect to the query, leading the user to make an incorrect judgment.\nBefore turning to the second case, we address the first, and examine the extent to which relevance alone may explain these inversions.\n3.3 Relevance The simplest explanation for the presence of a clickthrough inversion is a relevance difference between the higher ranking member of a caption pair and the lower ranking member.\nIn order to examine the extent to which relevance plays a role in clickthrough inversions, we conducted an initial experiment using a set of 1,811 queries with associated judgments created as part of on-going work.\nOver a four-day period, we sampled the search engine logs and extracted over one hundred thousand clicks involving these queries.\nFrom these clicks we identified 355 clickthrough inversions, satisfying the criteria of section 3.1, where relevance judgments existed for both pages.\nThe relevance judgments were made by independent assessors viewing the pages themselves, rather than the captions.\nRelevance was assessed on a 6-point scale.\nThe outcome is presented in figure 3, which shows the explicit judgments for the 355 clickthrough inversions.\nIn all of these cases, there were more clicks on the lower ranked member of the pair (B).\nRelationship Number Percent\nrel(A) < rel(B) 119 33.5%\nrel(A) = rel(B) 134 37.7%\nrel(A) > rel(B) 102 28.7%\nFigure 3: Relevance relationships at clickthrough inversions. Compares the relevance of the higher ranking member of a caption pair (rel(A)) to the relevance of the lower ranking member (rel(B)), where caption A received fewer clicks than caption B.\nThe figure shows the corresponding relevance judgments.\nFor example, the first row, rel(A) < rel(B), indicates that the higher ranking member of the pair (A) was rated as less relevant than the lower ranking member of the pair (B).\nAs we see in the figure, relevance alone appears inadequate to explain the majority of clickthrough inversions.\nFor two-thirds of the inversions (236), the page associated with caption A is at least as relevant as the page associated with caption B. 
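The tally behind figure 3 can be reproduced from the aggregated log records with a few lines of the following shape. This is a hedged sketch: the record fields (clicks_a, clicks_b, rel_a, rel_b) are assumptions about the caption-pair records rather than the schema actually used in the study.

```python
from collections import Counter

# Hedged sketch of the tallying behind figure 3: for each adjacent caption pair
# (A ranked above B) that exhibits a clickthrough inversion, compare the
# independent relevance judgments of the two pages. Field names are assumed.

def relevance_at_inversions(pairs):
    tally = Counter()
    for p in pairs:
        if p['clicks_a'] >= p['clicks_b']:
            continue   # not a clickthrough inversion
        if p['rel_a'] < p['rel_b']:
            tally['rel(A) < rel(B)'] += 1
        elif p['rel_a'] == p['rel_b']:
            tally['rel(A) = rel(B)'] += 1
        else:
            tally['rel(A) > rel(B)'] += 1
    return tally
```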
For 28.7% of the inversions, A has greater relevance than B, which received the greater number of clickthroughs.\n4.\nINFLUENCE OF CAPTION FEATURES Having demonstrated that clickthrough inversions cannot always be explained by relevance differences, we explore what features of caption pairs, if any, lead users to prefer one caption over another.\nFor example, we may hypothesize that the absence of a snippet in caption A and the presence of a snippet in caption B (e.g. captions 2 and 3 in figure 1) leads users to prefer caption A. Nonetheless, due to competing factors, a large set of clickthrough inversions may also include pairs where the snippet is missing in caption B and not in caption A. However, if we compare a large set of clickthrough inversions to a similar set of pairs for which the clickthroughs are consistent with their ranking, we would expect to see relatively more pairs where the snippet was missing in caption A. 4.1 Evaluation methodology Following this line of reasoning, we extracted two sets of caption pairs from search logs over a three day period.\nThe first is a set of nearly five thousand clickthrough inversions, extracted according to the procedure described in section 3.1.\nThe second is a corresponding set of caption pairs that do not exhibit clickthrough inversions.\nIn other words, for pairs in this set, the result at the higher rank (caption A) received more clickthroughs than the result at the lower rank (caption B).\nTo the greatest extent possible, each pair in the second set was selected to correspond to a pair in the first set, in terms of result position and number of clicks on each result.\nWe refer to the first set, containing clickthrough inversions, as the INV set; we refer to the second set, containing caption pairs for which the clickthroughs are consistent with their rank order, as the CON set.\nWe extract a number of features characterizing snippets (described in detail in the next section) and compare the presence of each feature in the INV and CON sets.\nWe describe the features as a hypothesized preference (e.g., a preference for captions containing a snippet).\nThus, in either set, a given feature may be present in one of two forms: favoring the higher ranked caption (caption A) or favoring the lower ranked caption (caption B).\nFor example, the abFeature Tag Description MissingSnippet snippet missing in caption A and present in caption B SnippetShort short snippet in caption A (< 25 characters) with long snippet (> 100 characters) in caption B TermMatchTitle title of caption A contains matches to fewer query terms than the title of caption B TermMatchTS title+snippet of caption A contains matches to fewer query terms than the title+snippet of caption B TermMatchTSU title+snippet+URL of caption A contains matches to fewer query terms than caption B TitleStartQuery title of caption B (but not A) starts with a phrase match to the query QueryPhraseMatch title+snippet+url contains the query as a phrase match MatchAll caption B contains one match to each term; caption A contains more matches with missing terms URLQuery caption B URL is of the form www.query.com where the query matches exactly with spaces removed URLSlashes caption A URL contains more slashes (i.e. 
a longer path length) than the caption B URL URLLenDIff caption A URL is longer than the caption B URL Official title or snippet of caption B (but not A) contains the term official (with stemming) Home title or snippet of caption B (but not A) contains the phrase home page Image title or snippet of caption B (but not A) contains a term suggesting the presence of an image gallery Readable caption B (but not A) passes a simple readability test Figure 4: Features measured in caption pairs (caption A and caption B), with caption A as the higher ranked result.\nThese features are expressed from the perspective of the prevalent relationship predicted for clickthrough inversions.\nsence of a snippet in caption A favors caption B, and the absence of a snippet in caption B favors caption A.\nWhen the feature favors caption B (consistent with a clickthrough inversion) we refer to the caption pair as a positive pair.\nWhen the feature favors caption A, we refer to it as a negative pair.\nFor missing snippets, a positive pair has the caption missing in caption A (but not B) and a negative pair has the caption missing in B (but not A).\nThus, for a specific feature, we can construct four subsets: 1) INV+, the set of positive pairs from INV; 2) INV\u2212, the set of negative pairs from INV; 3) CON+; the set of positive pairs from CON; and 4) CON\u2212 the set of negative pairs from CON.\nThe sets INV+, INV\u2212, CON+, and CON\u2212 will contain different subsets of INV and CON for each feature.\nWhen stating a feature corresponding to a hypothesized user preference, we follow the practice of stating the feature with the expectation that the size of INV+ relative to the size of INV\u2212 should be greater than the size of CON+ relative to the size of CON\u2212.\nFor example, we state the missing snippet feature as snippet missing in caption A and present in caption B.\nThis evaluation methodology allows us to construct a contingency table for each feature, with INV essentially forming the experimental group and CON the control group.\nWe can then apply Pearson``s chi-square test for significance.\n4.2 Features Figure 4 lists the features tested.\nMany of the features on this list correspond to our own assumptions regarding the importance of certain caption characteristics: the presence of query terms, the inclusion of a snippet, and the importance of query term matches in the title.\nOther features suggested themselves during the examination of the snippets collected as part of the study described in section 3.3 and during a pilot of the evaluation methodology (section 4.1).\nFor this pilot we collected INV and CON sets of similar sizes, and used these sets to evaluate a preliminary list of features and to establish appropriate parameters for the SnippetShort and Readable features.\nIn the pilot, all of the features list in figure 4 were significant at the 95% level.\nA small number of other features were dropped after the pilot.\nThese features all capture simple aspects of the captions.\nThe first feature concerns the existence of a snippet and the second concerns the relative size of snippets.\nApart from this first feature, we ignore pairs where one caption has a missing snippet.\nThese pairs are not included in the sets constructed for the remaining features, since captions with missing snippets do not contain all the elements of a standard caption and we wanted to avoid their influence.\nThe next six features concern the location and number of matching query terms.\nFor the first five, a match for 
each query term is counted only once, additional matches for the same term are ignored.\nThe MatchAll feature tests the idea that matching all the query terms exactly once is preferable to matching a subset of the terms many times with a least one query term unmatched.\nThe next three features concern the URLs, capturing aspects of their length and complexity, and the last four features concern caption content.\nThe first two of these content features (Official and Home) suggest claims about the importance or significance of the associated page.\nThe third content feature (Image) suggests the presence of an image gallery, a popular genre of Web page.\nTerms represented by this feature include pictures, pics, and gallery.\nThe last content feature (Readable) applies an ad-hoc readability metric to each snippet.\nRegular users of Web search engines may notice occasional snippets that consist of little more than lists of words and phrases, rather than a coherent description.\nWe define our own metric, since the Flesch-Kincaid readability score and similar measures are intended for entire documents not text fragments.\nWhile the metric has not been experimentally validated, it does reflect our intuitions and observations regarding result snippets.\nIn English, the 100 most frequent words represent about 48% of text, and we would expect readable prose, as opposed to a disjointed list of words, to contain these words in roughly this proportion.\nThe Readable feature computes the percentage of these top-100 words appearing in each caption.\nIf these words represent more than 40% of one caption and less than 10% of the other, the pair is included in the appropriate set.\nFeature Tag INV+ INV\u2212 %+ CON+ CON\u2212 %+ \u03c72 p-value MissingSnippet 185 121 60.4 144 133 51.9 4.2443 0.0393 SnippetShort 20 6 76.9 12 16 42.8 6.4803 0.0109 TermMatchTitle 800 559 58.8 660 700 48.5 29.2154 <.0001 TermMatchTS 310 213 59.2 269 216 55.4 1.4938 0.2216 TermMatchTSU 236 138 63.1 189 149 55.9 3.8088 0.0509 TitleStartQuery 1058 933 53.1 916 1096 45.5 23.1999 <.0001 QueryPhraseMatch 465 346 57.3 427 422 50.2 8.2741 0.0040 MatchAll 8 2 80.0 1 4 20.0 0.0470 URLQuery 277 188 59.5 159 315 33.5 63.9210 <.0001 URLSlashes 1715 1388 55.2 1380 1758 43.9 79.5819 <.0001 URLLenDiff 2288 2233 50.6 2062 2649 43.7 43.2974 <.0001 Official 215 142 60.2 133 215 38.2 34.1397 <.0001 Home 62 49 55.8 64 82 43.8 3.6458 0.0562 Image 391 270 59.1 315 335 48.4 15.0735 <.0001 Readable 52 43 54.7 31 48 39.2 4.1518 0.0415 Figure 5: Results corresponding to the features listed in figure 4 with \u03c72 and p-values (df = 1).\nFeatures supported at the 95% confidence level are bolded.\nThe p-value for the MatchAll feature is computed using Fisher``s Exact Test.\n4.3 Results Figure 5 presents the results.\nEach row lists the size of the four sets (INV+, INV\u2212, CON+, and CON\u2212) for a given feature and indicates the percentage of positive pairs (%+) for INV and CON.\nIn order to reject the null hypothesis, this percentage should be significantly greater for INV than CON.\nExcept in one case, we applied the chi-squared test of independence to these sizes, with p-values shown in the last column.\nFor the MatchAll feature, where the sum of the set sizes is 15, we applied Fisher``s exact test.\nFeatures supported at the 95% confidence level are bolded.\n5.\nCOMMENTARY The results support claims that missing snippets, short snippets, missing query terms and complex URLs negatively impact clickthroughs.\nWhile this outcome may not be surprising, 
Feature Tag INV+ INV\u2212 %+ CON+ CON\u2212 %+ \u03c72 p-value
MissingSnippet 185 121 60.4 144 133 51.9 4.2443 0.0393
SnippetShort 20 6 76.9 12 16 42.8 6.4803 0.0109
TermMatchTitle 800 559 58.8 660 700 48.5 29.2154 <.0001
TermMatchTS 310 213 59.2 269 216 55.4 1.4938 0.2216
TermMatchTSU 236 138 63.1 189 149 55.9 3.8088 0.0509
TitleStartQuery 1058 933 53.1 916 1096 45.5 23.1999 <.0001
QueryPhraseMatch 465 346 57.3 427 422 50.2 8.2741 0.0040
MatchAll 8 2 80.0 1 4 20.0 - 0.0470
URLQuery 277 188 59.5 159 315 33.5 63.9210 <.0001
URLSlashes 1715 1388 55.2 1380 1758 43.9 79.5819 <.0001
URLLenDiff 2288 2233 50.6 2062 2649 43.7 43.2974 <.0001
Official 215 142 60.2 133 215 38.2 34.1397 <.0001
Home 62 49 55.8 64 82 43.8 3.6458 0.0562
Image 391 270 59.1 315 335 48.4 15.0735 <.0001
Readable 52 43 54.7 31 48 39.2 4.1518 0.0415
Figure 5: Results corresponding to the features listed in figure 4 with \u03c72 and p-values (df = 1).\nFeatures supported at the 95% confidence level are bolded.\nThe p-value for the MatchAll feature is computed using Fisher's Exact Test.\n4.3 Results\nFigure 5 presents the results.\nEach row lists the size of the four sets (INV+, INV\u2212, CON+, and CON\u2212) for a given feature and indicates the percentage of positive pairs (%+) for INV and CON.\nIn order to reject the null hypothesis, this percentage should be significantly greater for INV than CON.\nExcept in one case, we applied the chi-squared test of independence to these sizes, with p-values shown in the last column.\nFor the MatchAll feature, where the sum of the set sizes is 15, we applied Fisher's exact test.\nFeatures supported at the 95% confidence level are bolded.\n
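The statistics in figure 5 can be recomputed from the four set sizes alone; the following Python sketch, using scipy as an assumed tool choice (the paper does not name its software), checks the MissingSnippet and MatchAll rows.

    # Recompute two rows of figure 5 from their 2x2 contingency tables.
    # Table layout: [[INV+, INV-], [CON+, CON-]]; scipy is an assumed choice.
    from scipy.stats import chi2_contingency, fisher_exact

    # MissingSnippet: chi-squared test of independence, no continuity correction.
    chi2, p, dof, _ = chi2_contingency([[185, 121], [144, 133]], correction=False)
    # chi2 comes to about 4.2443 with p about 0.039 at df = 1, matching the table row.

    # MatchAll: with only 15 pairs in total, Fisher's exact test is used instead.
    # The reported 0.0470 appears to correspond to a one-sided (greater) alternative.
    _, p_exact = fisher_exact([[8, 2], [1, 4]], alternative="greater")
    # p_exact is about 0.047.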
5.\nCOMMENTARY\nThe results support claims that missing snippets, short snippets, missing query terms, and complex URLs negatively impact clickthroughs.\nWhile this outcome may not be surprising, we are aware of no other work that can provide support for claims of this type in the context of a commercial Web search engine.\nThis work was originally motivated by our desire to validate some simple guidelines for the generation of captions - summarizing opinions that we formulated while working on related issues.\nWhile our results do not directly address all of the many variables that influence users' understanding of captions, they are consistent with the major guidelines.\nFurther work is needed to provide additional support for the guidelines and to understand the relationships among variables.\nThe first of these guidelines underscores the importance of displaying query terms in context: Whenever possible, all of the query terms should appear in the caption, reflecting their relationship to the associated page.\nIf a query term is missing from a caption, the user may have no idea why the result was returned.\nThe results for the MatchAll feature directly support this guideline.\nThe results for TermMatchTitle and TermMatchTSU confirm that matching more terms is desirable.\nOther features provide additional indirect support for this guideline, and none of the results are inconsistent with it.\nA second guideline speaks to the desirability of presenting the user with a readable snippet: When query terms are present in the title, they need not be repeated in the snippet.\nIn particular, when a high-quality query-independent summary is available from an external source, such as a Web directory, it may be more appropriate to display this summary than a lower-quality query-dependent fragment selected on-the-fly.\nWhen titles are available from multiple sources - the header, the body, Web directories - a caption generation algorithm might select a combination of title, snippet and URL that includes as many of the query terms as possible.\nWhen a title containing all query terms can be found, the algorithm might select a query-independent snippet.\nThe MatchAll and Readable features directly support this guideline.\nOnce again, other features provide indirect support, and none of the results are inconsistent with it.\nFinally, the length and complexity of a URL influence user behavior.\nWhen query terms appear in the URL they should be highlighted or otherwise distinguished.\nWhen multiple URLs reference the same page (due to re-directions, etc.), the shortest URL should be preferred, provided that all query terms will still appear in the caption.\nIn other words, URLs should be selected and displayed in a manner that emphasizes their relationship to the query.\nThe three URL features, as well as TermMatchTSU, directly support this guideline.\nThe influence of the Official and Image features led us to wonder what other terms are prevalent in the captions of clickthrough inversions.\nAs an additional experiment, we treated each of the terms appearing in the INV and CON sets as a separate feature (case normalized), ranking them by their \u03c72 values.\nThe results are presented in figure 6.\nSince we use the \u03c72 statistic as a divergence measure, rather than a significance test, no p-values are given.\nThe final column of the table indicates the direction of the influence, whether the presence of the terms positively or negatively influences clickthroughs.
Rank Term \u03c72 influence
1 encyclopedia 114.6891 \u2193
2 wikipedia 94.0033 \u2193
3 official 36.5566 \u2191
4 and 28.3349 \u2191
5 tourism 25.2003 \u2191
6 attractions 24.7283 \u2191
7 free 23.6529 \u2193
8 sexy 21.9773 \u2191
9 medlineplus 19.9726 \u2193
10 information 19.9115 \u2191
Figure 6: Words exhibiting the greatest positive (\u2191) and negative (\u2193) influence on clickthrough patterns.
The positive influence of "official" has already been observed (the difference in the \u03c72 value from that of figure 5 is due to stemming).\nNone of the terms included in the Image feature appear in the top ten, but "pictures" and "photos" appear at positions 21 and 22.\nThe high rank given to "and" may be related to readability (the term "the" appears in position 20).\nMost surprising to us is the negative influence of the terms "encyclopedia", "wikipedia", "free", and "medlineplus".\nThe first three terms appear in the title of Wikipedia articles (www.wikipedia.org) and the last appears in the title of MedlinePlus articles (www.nlm.nih.gov/medlineplus/).\nThese individual word-level features provide hints about issues that influence clickthrough behavior.\nMore detailed analyses and further experiments will be required to understand these features.\n6.\nCONCLUSIONS\nClickthrough inversions form an appropriate tool for assessing the influence of caption features.\nUsing clickthrough inversions, we have demonstrated that relatively simple caption features can significantly influence user behavior.\nTo our knowledge, this is the first methodology validated for assessing the quality of Web captions through implicit feedback.\nIn the future, we hope to substantially expand this work, considering more features over larger datasets.\nWe also hope to directly address the goal of predicting relevance from clickthroughs and other information present in search engine logs.\n7.\nACKNOWLEDGMENTS\nThis work was conducted while the first author was visiting Microsoft Research.\nThe authors thank members of the Windows Live team for their comments and assistance, particularly Girish Kumar, Luke DeLorme, Rohit Wad and Ramez Naam.\n8.\nREFERENCES\n[1] E. Agichtein, E. Brill, and S. Dumais.\nImproving web search ranking by incorporating user behavior information.\nIn 29th ACM SIGIR, pages 19-26, Seattle, August 2006.\n[2] E. Agichtein, E. Brill, S. Dumais, and R. Ragno.\nLearning user interaction models for predicting Web search result preferences.\nIn 29th ACM SIGIR, pages 3-10, Seattle, August 2006.\n[3] A. Broder.\nA taxonomy of Web search.\nSIGIR Forum, 36(2):3-10, 2002.\n[4] E. Cutrell and Z.
Guan.\nWhat are you looking for?\nAn eye-tracking study of information usage in Web search.\nIn SIGCHI Conference on Human Factors in Computing Systems, pages 407-416, San Jose, California, April-May 2007.\n[5] S. Dumais, E. Cutrell, and H. Chen.\nOptimizing search by showing results in context.\nIn SIGCHI Conference on Human Factors in Computing Systems, pages 277-284, Seattle, March-April 2001.\n[6] J. Goldstein, M. Kantrowitz, V. Mittal, and J. Carbonell.\nSummarizing text documents: Sentence selection and evaluation metrics.\nIn 22nd ACM SIGIR, pages 121-128, Berkeley, August 1999.\n[7] L. A. Granka, T. Joachims, and G. Gay.\nEye-tracking analysis of user behavior in WWW search.\nIn 27th ACM SIGIR, pages 478-479, Sheffield, July 2004.\n[8] Y. Hu, G. Xin, R. Song, G. Hu, S. Shi, Y. Cao, and H. Li.\nTitle extraction from bodies of HTML documents and its application to Web page retrieval.\nIn 28th ACM SIGIR, pages 250-257, Salvador, Brazil, August 2005.\n[9] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay.\nAccurately interpreting clickthrough data as implicit feedback.\nIn 28th ACM SIGIR, pages 154-161, Salvador, Brazil, August 2005.\n[10] U. Lee, Z. Liu, and J. Cho.\nAutomatic identification of user goals in Web search.\nIn 14th International World Wide Web Conference, pages 391-400, Edinburgh, May 2005.\n[11] H. P. Luhn.\nThe automatic creation of literature abstracts.\nIBM Journal of Research and Development, 2(2):159-165, April 1958.\n[12] T. Paek, S. Dumais, and R. Logan.\nWaveLens: A new view onto Internet search results.\nIn SIGCHI Conference on Human Factors in Computing Systems, pages 727-734, Vienna, Austria, April 2004.\n[13] D. Rose and D. Levinson.\nUnderstanding user goals in Web search.\nIn 13th International World Wide Web Conference, pages 13-19, New York, May 2004.\n[14] J.-T.\nSun, D. Shen, H.-J.\nZeng, Q. Yang, Y. Lu, and Z. Chen.\nWeb-page summarization using clickthrough data.\nIn 28th ACM SIGIR, pages 194-201, Salvador, Brazil, August 2005.\n[15] A. Tombros and M. Sanderson.\nAdvantages of query biased summaries in information retrieval.\nIn 21st ACM SIGIR, pages 2-10, Melbourne, Australia, August 1998.\n[16] R. Varadarajan and V. Hristidis.\nA system for query-specific document summarization.\nIn 15th ACM international conference on Information and knowledge management (CIKM), pages 622-631, Arlington, Virginia, November 2006.\n[17] R. W. White, I. Ruthven, and J. M. Jose.\nFinding relevant documents using top ranking sentences: An evaluation of two alternative schemes.\nIn 25th ACM SIGIR, pages 57-64, Tampere, Finland, August 2002.\n[18] G.-R.\nXue, H.-J.\nZeng, Z. Chen, Y. Yu, W.-Y.\nMa, W. Xi, and W. 
Fan.\nOptimizing web search using Web click-through data.\nIn 13th ACM Conference on Information and Knowledge Management (CIKM), pages 118-126, Washington, DC, November 2004.", "lvl-3": "The Influence of Caption Features on Clickthrough Patterns in Web Search\nABSTRACT\nWeb search engines present lists of captions , comprising title , snippet , and URL , to help users decide which search results to visit .\nUnderstanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation .\nIn this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions .\nThe findings of our study suggest that relatively simple caption features such as the presence of all terms query terms , the readability of the snippet , and the length of the URL shown in the caption , can significantly influence users ' Web search behavior .\n1 .\nINTRODUCTION\nThe major commercial Web search engines all present their results in much the same way .\nEach search result is described by a brief caption , comprising the URL of the associated Web page , a title , and a brief summary ( or `` snippet '' ) describing the contents of the page .\nOften the snippet is extracted from the Web page itself , but it may also be taken from external sources , such as the human-generated summaries found in Web directories .\nFigure 1 shows a typical Web search , with captions for the top three results .\nWhile the three captions share the same\nbasic structure , their content differs in several respects .\nThe snippet of the third caption is nearly twice as long as that of the first , while the snippet is missing entirely from the second caption .\nThe title of the third caption contains all of the query terms in order , while the titles of the first and second captions contain only two of the three terms .\nOne of the query terms is repeated in the first caption .\nAll of the query terms appear in the URL of the third caption , while none appear in the URL of the first caption .\nThe snippet of the first caption consists of a complete sentence that concisely describes the associated page , while the snippet of the third caption consists of two incomplete sentences that are largely unrelated to the overall contents of the associated page and to the apparent intent of the query .\nWhile these differences may seem minor , they may also have a substantial impact on user behavior .\nA principal motivation for providing a caption is to assist the user in determining the relevance of the associated page without actually having to click through to the result .\nIn the case of a navigational query -- particularly when the destination is well known -- the URL alone may be sufficient to identify the desired page .\nBut in the case of an informational query , the title and snippet may be necessary to guide the user in selecting a page for further study , and she may judge the relevance of a page on the basis of the caption alone .\nWhen this judgment is correct , it can speed the search process by allowing the user to avoid unwanted material .\nWhen it fails , the user may waste her time clicking through to an inappropriate result and scanning a page containing little or nothing of interest .\nEven worse , the user may be misled into skipping a page that contains desired information .\nAll three of the results in figure 1 are relevant , with some limitations .\nThe first 
result links to the main Yahoo Kids !\nhomepage , but it is then necessary to follow a link in a menu to find the main page for games .\nDespite appearances , the second result links to a surprisingly large collection of online games , primarily with environmental themes .\nThe third result might be somewhat disappointing to a user , since it leads to only a single game , hosted at the Centers for Disease Control , that could not reasonably be described as `` online '' .\nUnfortunately , these page characteristics are not entirely reflected in the captions .\nIn this paper , we examine the influence of caption features on user 's Web search behavior , using clickthroughs extracted from search engines logs as our primary investigative tool .\nUnderstanding this influence may help to validate algorithms and guidelines for the improved generation of the\nFigure 1 : Top three results for the query : kids online games .\ncaptions themselves .\nIn addition , these features can play a role in the process of inferring relevance judgments from user behavior [ 1 ] .\nBy better understanding their influence , better judgments may result .\nDifferent caption generation algorithms might select snippets of different lengths from different areas of a page .\nSnippets may be generated in a query-independent fashion , providing a summary of the page as a whole , or in a querydependent fashion , providing a summary of how the page relates to the query terms .\nThe correct choice of snippet may depend on aspects of both the query and the result page .\nThe title may be taken from the HTML header or extracted from the body of the document [ 8 ] .\nFor links that re-direct , it may be possible to display alternative URLs .\nMoreover , for pages listed in human-edited Web directories such as the Open Directory Project ' , it may be possible to display alternative titles and snippets derived from these listings .\nWhen these alternative snippets , titles and URLs are available , the selection of an appropriate combination for display may be guided by their features .\nA snippet from a Web directory may consist of complete sentences and be less fragmentary than an extracted snippet .\nA title extracted from the body may provide greater coverage of the query terms .\nA URL before re-direction may be shorter and provide a clearer idea of the final destination .\nThe work reported in this paper was undertaken in the context of the Windows Live search engine .\nThe image in figure 1 was captured from Windows Live and cropped to eliminate branding , advertising and navigational elements .\nThe experiments reported in later sections are based on Windows Live query logs , result pages and relevance judgments collected as part of ongoing research into search engine performance [ 1 , 2 ] .\nNonetheless , given the similarity of caption formats across the major Web search engines we believe the results are applicable to these other engines .\nThe query in ` www.dmoz.org figure 1 produces results with similar relevance on the other major search engines .\nThis and other queries produce captions that exhibit similar variations .\nIn addition , we believe our methodology may be generalized to other search applications when sufficient clickthrough data is available .\n2 .\nRELATED WORK\nWhile commercial Web search engines have followed similar approaches to caption display since their genesis , relatively little research has been published about methods for generating these captions and evaluating their impact on user behavior 
.\nMost related research in the area of document summarization has focused on newspaper articles and similar material , rather than Web pages , and has conducted evaluations by comparing automatically generated summaries with manually generated summaries .\nMost research on the display of Web results has proposed substantial interface changes , rather than addressing details of the existing interfaces .\n2.1 Display of Web results\nVaradarajan and Hristidis [ 16 ] are among the few who have attempted to improve directly upon the snippets generated by commercial search systems , without introducing additional changes to the interface .\nThey generated snippets from spanning trees of document graphs and experimentally compared these snippets against the snippets generated for the same documents by the Google desktop search system and MSN desktop search system .\nThey evaluated their method by asking users to compare snippets from the various sources .\nCutrell and Guan [ 4 ] conducted an eye-tracking study to investigate the influence of snippet length on Web search performance and found that the optimal snippet length varied according to the task type , with longer snippets leading to improved performance for informational tasks and shorter snippets for navigational tasks .\nSIGIR 2007 Proceedings Session 6 : Summaries\n2.2 Document summarization\n2.3 Clickthroughs\n3 .\nCLICKTHROUGH INVERSIONS\nSIGIR 2007 Proceedings Session 6 : Summaries\n3.1 Extracting clickthroughs\n3.2 Clickthrough curves\n3.3 Relevance\n4 .\nINFLUENCE OF CAPTION FEATURES\n4.1 Evaluation methodology\n4.2 Features\n4.3 Results\n5 .\nCOMMENTARY\n6 .\nCONCLUSIONS\nClickthrough inversions form an appropriate tool for assessing the influence of caption features .\nUsing clickthrough inversions , we have demonstrated that relatively simple caption features can significantly influence user behavior .\nTo our knowledge , this is first methodology validated for assessing the quality of Web captions through implicit feedback .\nIn the future , we hope to substantially expand this work , considering more features over larger datasets .\nWe also hope to directly address the goal of predicting relevance from clickthoughs and other information present in search engine logs .", "lvl-4": "The Influence of Caption Features on Clickthrough Patterns in Web Search\nABSTRACT\nWeb search engines present lists of captions , comprising title , snippet , and URL , to help users decide which search results to visit .\nUnderstanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation .\nIn this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions .\nThe findings of our study suggest that relatively simple caption features such as the presence of all terms query terms , the readability of the snippet , and the length of the URL shown in the caption , can significantly influence users ' Web search behavior .\n1 .\nINTRODUCTION\nThe major commercial Web search engines all present their results in much the same way .\nEach search result is described by a brief caption , comprising the URL of the associated Web page , a title , and a brief summary ( or `` snippet '' ) describing the contents of the page .\nOften the snippet is extracted from the Web page itself , but it may also be taken from external sources , such as the human-generated summaries found in Web 
directories .\nFigure 1 shows a typical Web search , with captions for the top three results .\nWhile the three captions share the same\nThe snippet of the third caption is nearly twice as long as that of the first , while the snippet is missing entirely from the second caption .\nThe title of the third caption contains all of the query terms in order , while the titles of the first and second captions contain only two of the three terms .\nOne of the query terms is repeated in the first caption .\nAll of the query terms appear in the URL of the third caption , while none appear in the URL of the first caption .\nWhile these differences may seem minor , they may also have a substantial impact on user behavior .\nA principal motivation for providing a caption is to assist the user in determining the relevance of the associated page without actually having to click through to the result .\nIn the case of a navigational query -- particularly when the destination is well known -- the URL alone may be sufficient to identify the desired page .\nBut in the case of an informational query , the title and snippet may be necessary to guide the user in selecting a page for further study , and she may judge the relevance of a page on the basis of the caption alone .\nWhen this judgment is correct , it can speed the search process by allowing the user to avoid unwanted material .\nWhen it fails , the user may waste her time clicking through to an inappropriate result and scanning a page containing little or nothing of interest .\nEven worse , the user may be misled into skipping a page that contains desired information .\nAll three of the results in figure 1 are relevant , with some limitations .\nThe first result links to the main Yahoo Kids !\nhomepage , but it is then necessary to follow a link in a menu to find the main page for games .\nDespite appearances , the second result links to a surprisingly large collection of online games , primarily with environmental themes .\nUnfortunately , these page characteristics are not entirely reflected in the captions .\nIn this paper , we examine the influence of caption features on user 's Web search behavior , using clickthroughs extracted from search engines logs as our primary investigative tool .\nUnderstanding this influence may help to validate algorithms and guidelines for the improved generation of the\nFigure 1 : Top three results for the query : kids online games .\ncaptions themselves .\nIn addition , these features can play a role in the process of inferring relevance judgments from user behavior [ 1 ] .\nBy better understanding their influence , better judgments may result .\nDifferent caption generation algorithms might select snippets of different lengths from different areas of a page .\nSnippets may be generated in a query-independent fashion , providing a summary of the page as a whole , or in a querydependent fashion , providing a summary of how the page relates to the query terms .\nThe correct choice of snippet may depend on aspects of both the query and the result page .\nFor links that re-direct , it may be possible to display alternative URLs .\nMoreover , for pages listed in human-edited Web directories such as the Open Directory Project ' , it may be possible to display alternative titles and snippets derived from these listings .\nWhen these alternative snippets , titles and URLs are available , the selection of an appropriate combination for display may be guided by their features .\nA snippet from a Web directory may consist of 
complete sentences and be less fragmentary than an extracted snippet .\nA title extracted from the body may provide greater coverage of the query terms .\nThe work reported in this paper was undertaken in the context of the Windows Live search engine .\nThe experiments reported in later sections are based on Windows Live query logs , result pages and relevance judgments collected as part of ongoing research into search engine performance [ 1 , 2 ] .\nNonetheless , given the similarity of caption formats across the major Web search engines we believe the results are applicable to these other engines .\nThe query in ` www.dmoz.org figure 1 produces results with similar relevance on the other major search engines .\nThis and other queries produce captions that exhibit similar variations .\nIn addition , we believe our methodology may be generalized to other search applications when sufficient clickthrough data is available .\n2 .\nRELATED WORK\nWhile commercial Web search engines have followed similar approaches to caption display since their genesis , relatively little research has been published about methods for generating these captions and evaluating their impact on user behavior .\nMost research on the display of Web results has proposed substantial interface changes , rather than addressing details of the existing interfaces .\n2.1 Display of Web results\nVaradarajan and Hristidis [ 16 ] are among the few who have attempted to improve directly upon the snippets generated by commercial search systems , without introducing additional changes to the interface .\nThey generated snippets from spanning trees of document graphs and experimentally compared these snippets against the snippets generated for the same documents by the Google desktop search system and MSN desktop search system .\nThey evaluated their method by asking users to compare snippets from the various sources .\n6 .\nCONCLUSIONS\nClickthrough inversions form an appropriate tool for assessing the influence of caption features .\nUsing clickthrough inversions , we have demonstrated that relatively simple caption features can significantly influence user behavior .\nTo our knowledge , this is first methodology validated for assessing the quality of Web captions through implicit feedback .\nWe also hope to directly address the goal of predicting relevance from clickthoughs and other information present in search engine logs .", "lvl-2": "The Influence of Caption Features on Clickthrough Patterns in Web Search\nABSTRACT\nWeb search engines present lists of captions , comprising title , snippet , and URL , to help users decide which search results to visit .\nUnderstanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation .\nIn this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions .\nThe findings of our study suggest that relatively simple caption features such as the presence of all terms query terms , the readability of the snippet , and the length of the URL shown in the caption , can significantly influence users ' Web search behavior .\n1 .\nINTRODUCTION\nThe major commercial Web search engines all present their results in much the same way .\nEach search result is described by a brief caption , comprising the URL of the associated Web page , a title , and a brief summary ( or `` snippet '' ) describing the contents of the page 
.\nOften the snippet is extracted from the Web page itself , but it may also be taken from external sources , such as the human-generated summaries found in Web directories .\nFigure 1 shows a typical Web search , with captions for the top three results .\nWhile the three captions share the same\nbasic structure , their content differs in several respects .\nThe snippet of the third caption is nearly twice as long as that of the first , while the snippet is missing entirely from the second caption .\nThe title of the third caption contains all of the query terms in order , while the titles of the first and second captions contain only two of the three terms .\nOne of the query terms is repeated in the first caption .\nAll of the query terms appear in the URL of the third caption , while none appear in the URL of the first caption .\nThe snippet of the first caption consists of a complete sentence that concisely describes the associated page , while the snippet of the third caption consists of two incomplete sentences that are largely unrelated to the overall contents of the associated page and to the apparent intent of the query .\nWhile these differences may seem minor , they may also have a substantial impact on user behavior .\nA principal motivation for providing a caption is to assist the user in determining the relevance of the associated page without actually having to click through to the result .\nIn the case of a navigational query -- particularly when the destination is well known -- the URL alone may be sufficient to identify the desired page .\nBut in the case of an informational query , the title and snippet may be necessary to guide the user in selecting a page for further study , and she may judge the relevance of a page on the basis of the caption alone .\nWhen this judgment is correct , it can speed the search process by allowing the user to avoid unwanted material .\nWhen it fails , the user may waste her time clicking through to an inappropriate result and scanning a page containing little or nothing of interest .\nEven worse , the user may be misled into skipping a page that contains desired information .\nAll three of the results in figure 1 are relevant , with some limitations .\nThe first result links to the main Yahoo Kids !\nhomepage , but it is then necessary to follow a link in a menu to find the main page for games .\nDespite appearances , the second result links to a surprisingly large collection of online games , primarily with environmental themes .\nThe third result might be somewhat disappointing to a user , since it leads to only a single game , hosted at the Centers for Disease Control , that could not reasonably be described as `` online '' .\nUnfortunately , these page characteristics are not entirely reflected in the captions .\nIn this paper , we examine the influence of caption features on user 's Web search behavior , using clickthroughs extracted from search engines logs as our primary investigative tool .\nUnderstanding this influence may help to validate algorithms and guidelines for the improved generation of the\nFigure 1 : Top three results for the query : kids online games .\ncaptions themselves .\nIn addition , these features can play a role in the process of inferring relevance judgments from user behavior [ 1 ] .\nBy better understanding their influence , better judgments may result .\nDifferent caption generation algorithms might select snippets of different lengths from different areas of a page .\nSnippets may be generated in a 
query-independent fashion , providing a summary of the page as a whole , or in a querydependent fashion , providing a summary of how the page relates to the query terms .\nThe correct choice of snippet may depend on aspects of both the query and the result page .\nThe title may be taken from the HTML header or extracted from the body of the document [ 8 ] .\nFor links that re-direct , it may be possible to display alternative URLs .\nMoreover , for pages listed in human-edited Web directories such as the Open Directory Project ' , it may be possible to display alternative titles and snippets derived from these listings .\nWhen these alternative snippets , titles and URLs are available , the selection of an appropriate combination for display may be guided by their features .\nA snippet from a Web directory may consist of complete sentences and be less fragmentary than an extracted snippet .\nA title extracted from the body may provide greater coverage of the query terms .\nA URL before re-direction may be shorter and provide a clearer idea of the final destination .\nThe work reported in this paper was undertaken in the context of the Windows Live search engine .\nThe image in figure 1 was captured from Windows Live and cropped to eliminate branding , advertising and navigational elements .\nThe experiments reported in later sections are based on Windows Live query logs , result pages and relevance judgments collected as part of ongoing research into search engine performance [ 1 , 2 ] .\nNonetheless , given the similarity of caption formats across the major Web search engines we believe the results are applicable to these other engines .\nThe query in ` www.dmoz.org figure 1 produces results with similar relevance on the other major search engines .\nThis and other queries produce captions that exhibit similar variations .\nIn addition , we believe our methodology may be generalized to other search applications when sufficient clickthrough data is available .\n2 .\nRELATED WORK\nWhile commercial Web search engines have followed similar approaches to caption display since their genesis , relatively little research has been published about methods for generating these captions and evaluating their impact on user behavior .\nMost related research in the area of document summarization has focused on newspaper articles and similar material , rather than Web pages , and has conducted evaluations by comparing automatically generated summaries with manually generated summaries .\nMost research on the display of Web results has proposed substantial interface changes , rather than addressing details of the existing interfaces .\n2.1 Display of Web results\nVaradarajan and Hristidis [ 16 ] are among the few who have attempted to improve directly upon the snippets generated by commercial search systems , without introducing additional changes to the interface .\nThey generated snippets from spanning trees of document graphs and experimentally compared these snippets against the snippets generated for the same documents by the Google desktop search system and MSN desktop search system .\nThey evaluated their method by asking users to compare snippets from the various sources .\nCutrell and Guan [ 4 ] conducted an eye-tracking study to investigate the influence of snippet length on Web search performance and found that the optimal snippet length varied according to the task type , with longer snippets leading to improved performance for informational tasks and shorter snippets for navigational tasks 
.\nSIGIR 2007 Proceedings Session 6 : Summaries\nMany researchers have explored alternative methods for displaying Web search results .\nDumais et al. [ 5 ] compared an interface typical of those used by major Web search engines with one that groups results by category , finding that users perform search tasks faster with the category interface .\nPaek et al. [ 12 ] propose an interface based on a fisheye lens , in which mouse hovers and other events cause captions to zoom and snippets to expand with additional text .\nWhite et al. [ 17 ] evaluated three alternatives to the standard Web search interface : one that displays expanded summaries on mouse hovers , one that displays a list of top ranking sentences extracted from the results taken as a group , and one that updates this list automatically through implicit feedback .\nThey treat the length of time that a user spends viewing a summary as an implicit indicator of relevance .\nTheir goal was to improve the ability of users to interact with a given result set , helping them to look beyond the first page of results and to reduce the burden of query re-formulation .\n2.2 Document summarization\nOutside the narrow context of Web search considerable related research has been undertaken on the problem of document summarization .\nThe basic idea of extractive summarization -- creating a summary by selecting sentences or fragments -- goes back to the foundational work of Luhn [ 11 ] .\nLuhn 's approach uses term frequencies to identify `` significant words '' within a document and then selects and extracts sentences that contain significant words in close proximity .\nA considerable fraction of later work may be viewed as extending and tuning this basic approach , developing improved methods for identifying significant words and selecting sentences .\nFor example , a recent paper by Sun et al. [ 14 ] describes a variant of Luhn 's algorithm that uses clickthrough data to identify significant words .\nAt its simplest , snippet generation for Web captions might also be viewed as following this approach , with query terms taking on the role of significant words .\nSince 2000 , the annual Document Understanding Conference ( DUC ) series , conducted by the US National Institute of Standards and Technology , has provided a vehicle for evaluating much of the research in document summarization2 .\nEach year DUC defines a methodology for one or more experimental tasks , and supplies the necessary test documents , human-created summaries , and automatically extracted baseline summaries .\nThe majority of participating systems use extractive summarization , but a number attempt natural language generation and other approaches .\nEvaluation at DUC is achieved through comparison with manually generated summaries .\nOver the years DUC has included both single-document summarization and multidocument summarization tasks .\nThe main DUC 2007 task is posed as taking place in a question answering context .\nGiven a topic and 25 documents , participants were asked to generate a 250-word summary satisfying the information need enbodied in the topic .\nWe view our approach of evaluating summarization through the analysis of Web logs as complementing the approach taken at DUC .\nA number of other researchers have examined the value of query-dependent summarization in a non-Web context .\nTombros and Sanderson [ 15 ] compared the performance of 20 subjects searching a collection of newspaper articles when 2duc .\nnist.gov guided by query-independent vs. 
query-dependent snippets .\nThe query-independent snippets were created by extracting the first few sentences of the articles ; the query-dependent snippets were created by selecting the highest scoring sentences under a measure biased towards sentences containing query terms .\nWhen query-dependent summaries were presented , subjects were better able to identify relevant documents without clicking through to the full text .\nGoldstein et al. [ 6 ] describe another extractive system for generating query-dependent summaries from newspaper articles .\nIn their system , sentences are ranked by combining statistical and linguistic features .\nThey introduce normalized measures of recall and precision to facilitate evaluation .\n2.3 Clickthroughs\nQueries and clickthroughs taken from the logs of commercial Web search engines have been widely used to improve the performance of these systems and to better understand how users interact with them .\nIn early work , Broder [ 3 ] examined the logs of the AltaVista search engine and identified three broad categories of Web queries : informational , navigational and transactional .\nRose and Levinson [ 13 ] conducted a similar study , developing a hierarchy of query goals with three top-level categories : informational , navigational and resource .\nUnder their taxonomy , a transactional query as defined by Broder might fall under either of their three categories , depending on details of the desired transaction .\nLee et al. [ 10 ] used clickthrough patterns to automatically categorize queries into one of two categories : informational -- for which multiple Websites may satisfy all or part of the user 's need -- and navigational -- for which users have a particular Website in mind .\nUnder their taxonomy , a transactional or resource query would be subsumed under one of these two categories .\nAgichtein et al. interpreted caption features , clickthroughs and other user behavior as implicit feedback to learn preferences [ 2 ] and improve ranking [ 1 ] in Web search .\nXue et al. [ 18 ] present several methods for associating queries with documents by analyzing clickthrough patterns and links between documents .\nQueries associated with documents in this way are treated as meta-data .\nIn effect , they are added to the document content for indexing and ranking purposes .\nOf particular interest to us is the work of Joachims et al. [ 9 ] and Granka et al. 
[ 7 ] .\nThey conducted eye-tracking studies and analyzed log data to determine the extent to which clickthrough data may be treated as implicit relevance judgments .\nThey identified a `` trust bias '' , which leads users to prefer the higher ranking result when all other factors are equal .\nIn addition , they explored techniques that treat clicks as pairwise preferences .\nFor example , a click at position N + 1 -- after skipping the result at position N -- may be viewed as a preference for the result at position N +1 relative to the result at position N .\nThese findings form the basis of the clickthrough inversion methodology we use to interpret user interactions with search results .\nOur examination of large search logs compliments their detailed analysis of a smaller number of participants .\n3 .\nCLICKTHROUGH INVERSIONS\nWhile other researchers have evaluated the display of Web search results through user studies -- presenting users with a small number of different techniques and asking them to complete experimental tasks -- we approach the problem\nSIGIR 2007 Proceedings Session 6 : Summaries\nby extracting implicit feedback from search engine logs .\nExamining user behavior in situ allows us to consider many more queries and caption characteristics , with the volume of available data compensating for the lack of a controlled lab environment .\nThe problem remains of interpreting the information in these logs as implicit indicators of user preferences , and in this matter we are guided by the work of Joachims et al. [ 9 ] .\nWe consider caption pairs , which appear adjacent to one another in the result list .\nOur primary tool for examining the influence of caption features is a type of pattern observed with respect to these caption pairs , which we call a clickthrough inversion .\nA clickthrough inversion occurs at position N when the result at position N receives fewer clicks than the result at position N + 1 .\nFollowing Joachims et al. [ 9 ] , we interpret a clickthrough inversion as indicating a preference for the lower ranking result , overcoming any trust bias .\nFor simplicity , in the remainder of this paper we refer to the higher ranking caption in a pair as `` caption A '' and the lower ranking caption as `` caption B '' .\n3.1 Extracting clickthroughs\nFor the experiments reported in this paper , we sampled a subset of the queries and clickthroughs from the logs of the Windows Live search engine over a period of 3-4 days on three separate occasions : once for results reported in section 3.3 , once for a pilot of our main experiment , and once for the experiment itself ( sections 4 and 5 ) .\nFor simplicity we restricted our sample to queries submitted to the US English interface and ignored any queries containing complex or non-alphanumeric terms ( e.g. 
operators and phrases ) .\nAt the end of each sampling period , we downloaded captions for the queries associated with the clickthrough sample .\nWhen identifying clickthroughs in search engine logs , we consider only the first clickthrough action taken by a user after entering a query and viewing the result page .\nUsers are identified by IP address , which is a reasonably reliable method of eliminating multiple results from a single user , at the cost of falsely eliminating results from multiple users sharing the same address .\nBy focusing on the initial clickthrough , we hope to capture a user 's impression of the relative relevance within a caption pair when first encountered .\nIf the user later clicks on other results or re-issues the same query , we ignore these actions .\nAny preference captured by a clickthrough inversion is therefore a preference among a group of users issuing a particular query , rather than a preference on the part of a single user .\nIn the remainder of the paper , we use the term `` clickthrough '' to refer only to this initial action .\nGiven the dynamic nature of the Web and the volumes of data involved , search engine logs are bound to contain considerable `` noise '' .\nFor example , even over a period of hours or minutes the order of results for a given query can change , with some results dropping out of the top ten and new ones appearing .\nFor this reason , we retained clickthroughs for a specific combination of a query and a result only if this result appears in a consistent position for at least 50 % of the clickthroughs .\nClickthroughs for the same result when it appeared at other positions were discarded .\nFor similar reasons , if we did not detect at least ten clickthroughs for a particular query during the sampling period , no clickthroughs for that query were retained .\nFigure 2 : Clickthrough curves for three queries : a ) a stereotypical navigational query , b ) a stereotypical informational query , and c ) a query exhibiting clickthrough inversions .\nThe outcome at the end of each sampling period is a set of records , with each record describing the clickthroughs for a given query/result combination .\nEach record includes a query , a result position , a title , a snippet , a URL , the number of clickthroughs for this result , and the total number of clickthroughs for this query .\nWe then processed this set to generate clickthrough curves and identify inversions .\n3.2 Clickthrough curves\nIt could be argued that under ideal circumstances , clickthrough inversions would not be present in search engine logs .\nA hypothetical `` perfect '' search engine would respond to a query by placing the result most likely to be relevant first in the result list .\nEach caption would appropriately summarize the content of the linked page and its relationship to the query , allowing users to make accurate judgments .\nLater results would complement earlier ones , linking to novel or supplementary material , and ordered by their interest to the greatest number of users .\nFigure 2 provides clickthrough curves for three example queries .\nFor each example , we plot the percentage of clickthroughs against position for the top ten results .\nThe first query ( craigslist ) is stereotypically navigational , showing a spike at the `` correct '' answer ( www.craigslist.org ) .\nThe second query is informational in the sense of Lee et al. 
[ 10 ] ( periodic table of elements ) .\nIts curve is flatter and less skewed toward a single result .\nFor both queries , the number of clickthroughs is consistent with the result positions , with the percentage of clickthroughs decreasing monotonically as position increases , the ideal behavior .\nRegrettably , no search engine is perfect , and clickthrough inversions are seen for many queries .\nFor example , for the third query ( kids online games ) the clickthrough curve exhibits a number of clickthrough inversions , with an apparent preference for the result at position 4 .\nSeveral causes may be enlisted to explain the presence of an inversion in a clickthrough curve .\nThe search engine may have failed in its primary goal , ranking more relevant results below less relevant results .\nEven when the relative ranking is appropriate , a caption may fail to reflect the content of the underlying page with respect to the query , leading the user to make an incorrect judgment .\nBefore turning to the second case , we address the first , and examine the extent to which relevance alone may explain these inversions .\n3.3 Relevance\nThe simplest explanation for the presence of a clickthrough inversion is a relevance difference between the higher ranking member of caption pair and the lower ranking member .\nIn order to examine the extent to which relevance plays a role in clickthrough inversions , we conducted an initial experiment using a set of 1,811 queries with associated judgments created as part of on-going work .\nOver a four-day period , we sampled the search engine logs and extracted over one hundred thousand clicks involving these queries .\nFrom these clicks we identified 355 clickthrough inversions , satisfying the criteria of section 3.1 , where relevance judgments existed for both pages .\nThe relevance judgments were made by independent assessors viewing the pages themselves , rather than the captions .\nRelevance was assessed on a 6-point scale .\nThe outcome is presented in figure 3 , which shows the explicit judgments for the 355 clickthrough inversions .\nIn all of these cases , there were more clicks on the lower ranked member of the\nFigure 3 : Relevance relationships at clickthrough inversions .\nCompares relevance between the higher ranking member of a caption pair ( rel ( A ) ) to the relevance of the lower ranking member ( rel ( B ) ) , where caption A received fewer clicks than caption B.\npair ( B ) .\nThe figure shows the corresponding relevance judgments .\nFor example , the first row rel ( A ) < rel ( B ) , indicates that the higher ranking member of pair ( A ) was rated as less relevant than the lower ranking member of the pair ( B ) .\nAs we see in the figure , relevance alone appears inadequate to explain the majority of clickthrough inversions .\nFor twothirds of the inversions ( 236 ) , the page associated with caption A is at least as relevant as the page associated with caption B. For 28.7 % of the inversions , A has greater relevance than B , which received the greater number of clickthroughs .\n4 .\nINFLUENCE OF CAPTION FEATURES\nHaving demonstrated that clickthrough inversions can not always be explained by relevance differences , we explore what features of caption pairs , if any , lead users to prefer one caption over another .\nFor example , we may hypothesize that the absence of a snippet in caption A and the presence of a snippet in caption B ( e.g. captions 2 and 3 in figure 1 ) leads users to prefer caption A. 
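As a hedged illustration of how such a feature can be tallied against logged caption pairs (the data structures and names below are ours, not the paper's), a pair is assigned to one of the four sets defined in section 4.1 according to whether it is a clickthrough inversion and which caption the feature favors:

    # Illustrative sketch: sort caption pairs into the four sets of section 4.1,
    # INV+/INV- for clickthrough inversions and CON+/CON- for consistent pairs.
    # The Caption type and field names are assumptions for this sketch.
    from collections import defaultdict, namedtuple

    Caption = namedtuple("Caption", "title snippet url")

    def missing_snippet_sign(caption_a, caption_b):
        # '+' (favors B): snippet absent in the higher-ranked caption A only.
        if not caption_a.snippet and caption_b.snippet:
            return "+"
        # '-' (favors A): snippet absent in the lower-ranked caption B only.
        if caption_a.snippet and not caption_b.snippet:
            return "-"
        return None  # pair not counted for this feature

    def tally(pairs):
        # pairs: iterable of (caption_a, caption_b, is_inversion) tuples.
        counts = defaultdict(int)
        for a, b, is_inversion in pairs:
            sign = missing_snippet_sign(a, b)
            if sign is not None:
                counts[("INV" if is_inversion else "CON") + sign] += 1
        return counts  # keys: 'INV+', 'INV-', 'CON+', 'CON-'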
Nonetheless , due to competing factors , a large set of clickthrough inversions may also include pairs where the snippet is missing in caption B and not in caption A. However , if we compare a large set of clickthrough inversions to a similar set of pairs for which the clickthroughs are consistent with their ranking , we would expect to see relatively more pairs where the snippet was missing in caption A.\n4.1 Evaluation methodology\nFollowing this line of reasoning , we extracted two sets of caption pairs from search logs over a three day period .\nThe first is a set of nearly five thousand clickthrough inversions , extracted according to the procedure described in section 3.1 .\nThe second is a corresponding set of caption pairs that do not exhibit clickthrough inversions .\nIn other words , for pairs in this set , the result at the higher rank ( caption A ) received more clickthroughs than the result at the lower rank ( caption B ) .\nTo the greatest extent possible , each pair in the second set was selected to correspond to a pair in the first set , in terms of result position and number of clicks on each result .\nWe refer to the first set , containing clickthrough inversions , as the INV set ; we refer to the second set , containing caption pairs for which the clickthroughs are consistent with their rank order , as the CON set .\nWe extract a number of features characterizing snippets ( described in detail in the next section ) and compare the presence of each feature in the INV and CON sets .\nWe describe the features as a hypothesized preference ( e.g. , a preference for captions containing a snippet ) .\nThus , in either set , a given feature may be present in one of two forms : favoring the higher ranked caption ( caption A ) or favoring the lower ranked caption ( caption B ) .\nFor example , the ab\ntitle of caption A contains matches to fewer query terms than the title of caption B title + snippet of caption A contains matches to fewer query terms than the title + snippet of caption B title + snippet + URL of caption A contains matches to fewer query terms than caption B title of caption B ( but not A ) starts with a phrase match to the query title + snippet + url contains the query as a phrase match caption B contains one match to each term ; caption A contains more matches with missing terms\ntitle or snippet of caption B ( but not A ) contains the term `` official '' ( with stemming ) title or snippet of caption B ( but not A ) contains the phrase `` home page '' title or snippet of caption B ( but not A ) contains a term suggesting the presence of an image gallery caption B ( but not A ) passes a simple readability test\nFigure 4 : Features measured in caption pairs ( caption A and caption B ) , with caption A as the higher ranked result .\nThese features are expressed from the perspective of the prevalent relationship predicted for clickthrough inversions .\nsence of a snippet in caption A favors caption B , and the absence of a snippet in caption B favors caption A .\nWhen the feature favors caption B ( consistent with a clickthrough inversion ) we refer to the caption pair as a `` positive '' pair .\nWhen the feature favors caption A , we refer to it as a `` negative '' pair .\nFor missing snippets , a positive pair has the caption missing in caption A ( but not B ) and a negative pair has the caption missing in B ( but not A ) .\nThus , for a specific feature , we can construct four subsets : 1 ) INV + , the set of positive pairs from INV ; 2 ) INV -- , the set of 
negative pairs from INV ; 3 ) CON + ; the set of positive pairs from CON ; and 4 ) CON -- the set of negative pairs from CON .\nThe sets INV + , INV -- , CON + , and CON -- will contain different subsets of INV and CON for each feature .\nWhen stating a feature corresponding to a hypothesized user preference , we follow the practice of stating the feature with the expectation that the size of INV + relative to the size of INV -- should be greater than the size of CON + relative to the size of CON -- .\nFor example , we state the missing snippet feature as `` snippet missing in caption A and present in caption B '' .\nThis evaluation methodology allows us to construct a contingency table for each feature , with INV essentially forming the experimental group and CON the control group .\nWe can then apply Pearson 's chi-square test for significance .\n4.2 Features\nFigure 4 lists the features tested .\nMany of the features on this list correspond to our own assumptions regarding the importance of certain caption characteristics : the presence of query terms , the inclusion of a snippet , and the importance of query term matches in the title .\nOther features suggested themselves during the examination of the snippets collected as part of the study described in section 3.3 and during a pilot of the evaluation methodology ( section 4.1 ) .\nFor this pilot we collected INV and CON sets of similar sizes , and used these sets to evaluate a preliminary list of features and to establish appropriate parameters for the SnippetShort and Readable features .\nIn the pilot , all of the features list in figure 4 were significant at the 95 % level .\nA small number of other features were dropped after the pilot .\nThese features all capture simple aspects of the captions .\nThe first feature concerns the existence of a snippet and the second concerns the relative size of snippets .\nApart from this first feature , we ignore pairs where one caption has a missing snippet .\nThese pairs are not included in the sets constructed for the remaining features , since captions with missing snippets do not contain all the elements of a standard caption and we wanted to avoid their influence .\nThe next six features concern the location and number of matching query terms .\nFor the first five , a match for each query term is counted only once , additional matches for the same term are ignored .\nThe MatchAll feature tests the idea that matching all the query terms exactly once is preferable to matching a subset of the terms many times with a least one query term unmatched .\nThe next three features concern the URLs , capturing aspects of their length and complexity , and the last four features concern caption content .\nThe first two of these content features ( Official and Home ) suggest claims about the importance or significance of the associated page .\nThe third content feature ( Image ) suggests the presence of an image gallery , a popular genre of Web page .\nTerms represented by this feature include `` pictures '' , `` pics '' , and `` gallery '' .\nThe last content feature ( Readable ) applies an ad hoc readability metric to each snippet .\nRegular users of Web search engines may notice occasional snippets that consist of little more than lists of words and phrases , rather than a coherent description .\nWe define our own metric , since the Flesch-Kincaid readability score and similar measures are intended for entire documents not text fragments .\nWhile the metric has not been experimentally validated , it 
does reflect our intuitions and observations regarding result snippets .\nIn English , the 100 most frequent words represent about 48 % of text , and we would expect readable prose , as opposed to a disjointed list of words , to contain these words in roughly this proportion .\nThe Readable feature computes the percentage of these top-100 words appearing in each caption .\nIf these words represent more than 40 % of one caption and less than 10 % of the other , the pair is included in the appropriate set .\nFigure 5 : Results corresponding to the features listed in figure 4 with \u03c72 and p-values ( df = 1 ) .\nFeatures supported at the 95 % confidence level are bolded .\nThe p-value for the MatchAll feature is computed using Fisher 's Exact Test .\n4.3 Results\nFigure 5 presents the results .\nEach row lists the size of the four sets ( INV + , INV \u2212 , CON + , and CON \u2212 ) for a given feature and indicates the percentage of positive pairs ( % + ) for INV and CON .\nIn order to reject the null hypothesis , this percentage should be significantly greater for INV than CON .\nExcept in one case , we applied the chi-squared test of independence to these sizes , with p-values shown in the last column .\nFor the MatchAll feature , where the sum of the set sizes is 15 , we applied Fisher 's exact test .\nFeatures supported at the 95 % confidence level are bolded .\n5 .\nCOMMENTARY\nThe results support claims that missing snippets , short snippets , missing query terms and complex URLs negatively impact clickthroughs .\nWhile this outcome may not be surprising , we are aware of no other work that can provide support for claims of this type in the context of a commercial Web search engine .\nThis work was originally motivated by our desire to validate some simple guidelines for the generation of captions -- summarizing opinions that we formulated while working on related issues .\nWhile our results do not direct address all of the many variables that influence users understanding of captions , they are consistent with the major guidelines .\nFurther work is needed to provide additional support for the guidelines and to understand the relationships among variables .\nThe first of these guidelines underscores the importance of displaying query terms in context : Whenever possible all of the query terms should appear in the caption , reflecting their relationship to the associated page .\nIf a query term is missing from a caption , the user may have no idea why the result was returned .\nThe results for the MatchAll feature directly support this guideline .\nThe results for TermMatchTitle and TermMatchTSU confirm that matching more terms is desirable .\nOther features provide additional indirect support for this guideline , and none of the results are inconsistent with it .\nA second guideline speaks to the desirability of presenting the user with a readable snippet : When query terms are present in the title , they need not be repeated in the snippet .\nIn particular , when a high-quality query-independent summary is available from an external source , such as a Web directory , it may be more appropriate to display this summary than a lower-quality query-dependent fragment selected on-the-fly .\nWhen titles are available from multiple sources -- the header , the body , Web directories -- a caption generation algorithm might a select a combination of title , snippet and URL that includes as many of the query terms as possible .\nWhen a title containing all query terms can be found , the 
algorithm might select a query-independent snippet .\nThe MatchAll and Readable features directly support this guideline .\nOnce again , other features provide indirect support , and none of the results are inconsistent with it .\nFinally , the length and complexity of a URL influences user behavior .\nWhen query terms appear in the URL they should highlighted or otherwise distinguished .\nWhen multiple URLs reference the same page ( due to re-directions , etc. ) the shortest URL should be preferred , provided that all query terms will still appear in the caption .\nIn other words , URLs should be selected and displayed in a manner that emphasizes their relationship to the query .\nThe three URL features , as well as TermMatchTSU , directly support this guideline .\nThe influence of the Official and Image features led us to wonder what other terms are prevalent in the captions of clickthrough inversions .\nAs an additional experiment , we treated each of the terms appearing in the INV and CON sets as a separate feature ( case normalized ) , ranking them by their \u03c72 values .\nThe results are presented in figure 6 .\nSince we use the \u03c72 statistic as a divergence measure , rather than a significance test , no p-values are given .\nThe final column of the table indicates the direction of the influence , whether the presence of the terms positively or negatively influence clickthroughs .\nThe positive influence of `` official '' has already been observed ( the difference in the \u03c72 value from that of figure 5 is due to stemming ) .\nNone of the terms included in the Image\nFigure 6 : Words exhibiting the greatest positive ( T ) and negative ( 1 ) influence on clickthrough patterns .\nfeature appear in the top ten , but `` pictures '' and `` photos '' appear at positions 21 and 22 .\nThe high rank given to `` and '' may be related to readability ( the term `` the '' appears in position 20 ) .\nMost surprising to us is the negative influence of the terms : `` encyclopedia '' , `` wikipedia '' , `` free '' , and `` medlineplus '' .\nThe first three terms appear in the title of Wikipedia articles3 and the last appears in the title of MedlinePlus articles4 .\nThese individual word-level features provide hints about issues .\nMore detailed analyses and further experiments will be required to understand these features .\n6 .\nCONCLUSIONS\nClickthrough inversions form an appropriate tool for assessing the influence of caption features .\nUsing clickthrough inversions , we have demonstrated that relatively simple caption features can significantly influence user behavior .\nTo our knowledge , this is first methodology validated for assessing the quality of Web captions through implicit feedback .\nIn the future , we hope to substantially expand this work , considering more features over larger datasets .\nWe also hope to directly address the goal of predicting relevance from clickthoughs and other information present in search engine logs ."} {"id": "H-9", "title": "", "abstract": "", "keyphrases": ["retriev model", "rank function", "ambigu", "cluster view", "meaning cluster label", "histori collect", "past queri", "clickthrough", "star cluster algorithm", "suffix tree cluster algorithm", "search result snippet", "monothet cluster algorithm", "pseudo-document", "pairwis similar graph", "similar threshold paramet", "centroid-base method", "cosin similar", "centroid prototyp", "reciproc rank", "log-base method", "mean averag precis", "search result organ", "search engin log", "interest 
aspect"], "prmu": [], "lvl-1": "Learn from Web Search Logs to Organize Search Results Xuanhui Wang Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 xwang20@cs.uiuc.edu ChengXiang Zhai Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 czhai@cs.uiuc.edu ABSTRACT Effective organization of search results is critical for improving the utility of any search engine.\nClustering search results is an effective way to organize search results, which allows a user to navigate into relevant documents quickly.\nHowever, two deficiencies of this approach make it not always work well: (1) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user``s perspective; and (2) the cluster labels generated are not informative enough to allow a user to identify the right cluster.\nIn this paper, we propose to address these two deficiencies by (1) learning interesting aspects of a topic from Web search logs and organizing search results accordingly; and (2) generating more meaningful cluster labels using past query words entered by users.\nWe evaluate our proposed method on a commercial search engine log data.\nCompared with the traditional methods of clustering search results, our method can give better result organization and more meaningful labels.\nCategories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Clustering, Search process General Terms: Algorithm, Experimentation 1.\nINTRODUCTION The utility of a search engine is affected by multiple factors.\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function, how to organize and present search results is also a very important factor that can affect the utility of a search engine significantly.\nCompared with the vast amount of literature on retrieval models, however, there is relatively little research on how to improve the effectiveness of search result organization.\nThe most common strategy of presenting search results is a simple ranked list.\nIntuitively, such a presentation strategy is reasonable for non-ambiguous, homogeneous search results; in general, it would work well when the search results are good and a user can easily find many relevant documents in the top ranked results.\nHowever, when the search results are diverse (e.g., due to ambiguity or multiple aspects of a topic) as is often the case in Web search, the ranked list presentation would not be effective; in such a case, it would be better to group the search results into clusters so that a user can easily navigate into a particular interesting group.\nFor example, the results in the first page returned from Google for the ambiguous query jaguar (as of Dec. 
2nd, 2006) contain at least four different senses of jaguar (i.e., car, animal, software, and a sports team); even for a more refined query such as jaguar team picture, the results are still quite ambiguous, including at least four different jaguar teams - a wrestling team, a jaguar car team, Southwestern College Jaguar softball team, and Jacksonville Jaguar football team.\nMoreover, if a user wants to find a place to download a jaguar software, a query such as download jaguar is also not very effective as the dominating results are about downloading jaguar brochure, jaguar wallpaper, and jaguar DVD.\nIn these examples, a clustering view of the search results would be much more useful to a user than a simple ranked list.\nClustering is also useful when the search results are poor, in which case, a user would otherwise have to go through a long list sequentially to reach the very first relevant document.\nAs a primary alternative strategy for presenting search results, clustering search results has been studied relatively extensively [9, 15, 26, 27, 28].\nThe general idea in virtually all the existing work is to perform clustering on a set of topranked search results to partition the results into natural clusters, which often correspond to different subtopics of the general query topic.\nA label will be generated to indicate what each cluster is about.\nA user can then view the labels to decide which cluster to look into.\nSuch a strategy has been shown to be more useful than the simple ranked list presentation in several studies [8, 9, 26].\nHowever, this clustering strategy has two deficiencies which make it not always work well: First, the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user``s perspective.\nFor example, users are often interested in finding either phone codes or zip codes when entering the query area codes.\nBut the clusters discovered by the current methods may partition the results into local codes and international codes.\nSuch clusters would not be very useful for users; even the best cluster would still have a low precision.\nSecond, the cluster labels generated are not informative enough to allow a user to identify the right cluster.\nThere are two reasons for this problem: (1) The clusters are not corresponding to a user``s interests, so their labels would not be very meaningful or useful.\n(2) Even if a cluster really corresponds to an interesting aspect of the topic, the label may not be informative because it is usually generated based on the contents in a cluster, and it is possible that the user is not very familiar with some of the terms.\nFor example, the ambiguous query jaguar may mean an animal or a car.\nA cluster may be labeled as panthera onca.\nAlthough this is an accurate label for a cluster with the animal sense of jaguar, if a user is not familiar with the phrase, the label would not be helpful.\nIn this paper, we propose a different strategy for partitioning search results, which addresses these two deficiencies through imposing a user-oriented partitioning of the search results.\nThat is, we try to figure out what aspects of a search topic are likely interesting to a user and organize the results accordingly.\nSpecifically, we propose to do the following: First, we will learn interesting aspects of similar topics from search logs and organize search results based on these interesting aspects.\nFor example, if the current query has occurred many times in the search logs, we can look at 
what kinds of pages viewed by the users in the results and what kind of words are used together with such a query.\nIn case when the query is ambiguous such as jaguar we can expect to see some clear clusters corresponding different senses of jaguar.\nMore importantly, even if a word is not ambiguous (e.g., car), we may still discover interesting aspects such as car rental and car pricing (which happened to be the two primary aspects discovered in our search log data).\nSuch aspects can be very useful for organizing future search results about car.\nNote that in the case of car, clusters generated using regular clustering may not necessarily reflect such interesting aspects about car from a user``s perspective, even though the generated clusters are coherent and meaningful in other ways.\nSecond, we will generate more meaningful cluster labels using past query words entered by users.\nAssuming that the past search logs can help us learn what specific aspects are interesting to users given the current query topic, we could also expect that those query words entered by users in the past that are associated with the current query can provide meaningful descriptions of the distinct aspects.\nThus they can be better labels than those extracted from the ordinary contents of search results.\nTo implement the ideas presented above, we rely on search engine logs and build a history collection containing the past queries and the associated clickthroughs.\nGiven a new query, we find its related past queries from the history collection and learn aspects through applying the star clustering algorithm [2] to these past queries and clickthroughs.\nWe can then organize the search results into these aspects using categorization techniques and label each aspect by the most representative past query in the query cluster.\nWe evaluate our method for result organization using logs of a commercial search engine.\nWe compare our method with the default search engine ranking and the traditional clustering of search results.\nThe results show that our method is effective for improving search utility and the labels generated using past query words are more readable than those generated using traditional clustering approaches.\nThe rest of the paper is organized as follows.\nWe first review the related work in Section 2.\nIn Section 3, we describe search engine log data and our procedure of building a history collection.\nIn Section 4, we present our approach in details.\nWe describe the data set in Section 5 and the experimental results are discussed in Section 6.\nFinally, we conclude our paper and discuss future work in Section 7.\n2.\nRELATED WORK Our work is closely related to the study of clustering search results.\nIn [9, 15], the authors used Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system.\nTheir results validate the cluster hypothesis [20] that relevant documents tend to form clusters.\nThe system Grouper was described in [26, 27].\nIn these papers, the authors proposed to cluster the results of a real search engine based on the snippets or the contents of returned documents.\nSeveral clustering algorithms are compared and the Suffix Tree Clustering algorithm (STC) was shown to be the most effective one.\nThey also showed that using snippets is as effective as using whole documents.\nHowever, an important challenge of document clustering is to generate meaningful labels for clusters.\nTo overcome this difficulty, in [28], supervised learning 
algorithms were studied to extract meaningful phrases from the search result snippets and these phrases were then used to group search results.\nIn [13], the authors proposed to use a monothetic clustering algorithm, in which a document is assigned to a cluster based on a single feature, to organize search results, and the single feature is used to label the corresponding cluster.\nClustering search results has also attracted a lot of attention in industry and commercial Web services such as Vivisimo [22].\nHowever, in all these works, the clusters are generated solely based on the search results.\nThus the obtained clusters do not necessarily reflect users'' preferences and the generated labels may not be informative from a user``s viewpoint.\nMethods of organizing search results based on text categorization are studied in [6, 8].\nIn this work, a text classifier is trained using a Web directory and search results are then classified into the predefined categories.\nThe authors designed and studied different category interfaces and they found that category interfaces are more effective than list interfaces.\nHowever predefined categories are often too general to reflect the finer granularity aspects of a query.\nSearch logs have been exploited for several different purposes in the past.\nFor example, clustering search queries to find those Frequent Asked Questions (FAQ) is studied in [24, 4].\nRecently, search logs have been used for suggesting query substitutes [12], personalized search [19], Web site design [3], Latent Semantic Analysis [23], and learning retrieval ranking functions [16, 10, 1].\nIn our work, we explore past query history in order to better organize the search results for future queries.\nWe use the star clustering algorithm [2], which is a graph partition based approach, to learn interesting aspects from search logs given a new query.\nThus past queries are clustered in a query specific manner and this is another difference from previous works such as [24, 4] in which all queries in logs are clustered in an o\ufb04ine batch manner.\n3.\nSEARCH ENGINE LOGS Search engine logs record the activities of Web users, which reflect the actual users'' needs or interests when conducting ID Query URL Time 1 win zip http://www.winzip.com xxxx 1 win zip http://www.swinzip.com/winzip xxxx 2 time zones http://www.timeanddate.com xxxx ... ... ... ... 
Table 1: Sample entries of search engine logs.\nDifferent ID``s mean different sessions.\nWeb search.\nThey generally have the following information: text queries that users submitted, the URLs that they clicked after submitting the queries, and the time when they clicked.\nSearch engine logs are separated by sessions.\nA session includes a single query and all the URLs that a user clicked after issuing the query [24].\nA small sample of search log data is shown in Table 1.\nOur idea of using search engine logs is to treat these logs as past history, learn users'' interests using this history data automatically, and represent their interests by representative queries.\nFor example, in the search logs, a lot of queries are related to car and this reflects that a large number of users are interested in information about car.\nDifferent users are probably interested in different aspects of car.\nSome are looking for renting a car, thus may submit a query like car rental; some are more interested in buying a used car, and may submit a query like used car; and others may care more about buying a car accessory, so they may use a query like car audio.\nBy mining all the queries which are related to the concept of car, we can learn the aspects that are likely interesting from a user``s perspective.\nAs an example, the following is some aspects about car learned from our search log data (see Section 5).\n1.\ncar rental, hertz car rental, enterprise car rental, ... 2.\ncar pricing, used car, car values, ... 3.\ncar accidents, car crash, car wrecks, ... 4.\ncar audio, car stereo, car speaker, ... In order to learn aspects from search engine logs, we preprocess the raw logs to build a history data collection.\nAs shown above, search engine logs consist of sessions.\nEach session contains the information of the text query and the clicked Web page URLs, together with the time that the user did the clicks.\nHowever, this information is limited since URLs alone are not informative enough to tell the intended meaning of a submitted query accurately.\nTo gather rich information, we enrich each URL with additional text content.\nSpecifically, given the query in a session, we obtain its top-ranked results using the search engine from which we obtained our log data, and extract the snippets of the URLs that are clicked on according to the log information in the corresponding session.\nAll the titles, snippets, and URLs of the clicked Web pages of that query are used to represent the session.\nDifferent sessions may contain the same queries.\nThus the number of sessions could be quite huge and the information in the sessions with the same queries could be redundant.\nIn order to improve the scalability and reduce data sparseness, we aggregate all the sessions which contain exactly the same queries together.\nThat is, for each unique query, we build a pseudo-document which consists of all the descriptions of its clicks in all the sessions aggregated.\nThe keywords contained in the queries themselves can be regarded as brief summaries of the pseudo-documents.\nAll these pseudo-documents form our history data collection, which is used to learn interesting aspects in the following section.\n4.\nOUR APPROACH Our approach is to organize search results by aspects learned from search engine logs.\nGiven an input query, the general procedure of our approach is: 1.\nGet its related information from search engine logs.\nAll the information forms a working set.\n2.\nLearn aspects from the information in the working 
set.\nThese aspects correspond to users'' interests given the input query.\nEach aspect is labeled with a representative query.\n3.\nCategorize and organize the search results of the input query according to the aspects learned above.\nWe now give a detailed presentation of each step.\n4.1 Finding Related Past Queries Given a query q, a search engine will return a ranked list of Web pages.\nTo know what the users are really interested in given this query, we first retrieve its past similar queries in our preprocessed history data collection.\nFormally, assume we have N pseudo-documents in our history data set: H = {Q1, Q2, ..., QN }.\nEach Qi corresponds to a unique query and is enriched with clickthrough information as discussed in Section 3.\nTo find q``s related queries in H, a natural way is to use a text retrieval algorithm.\nHere we use the OKAPI method [17], one of the state-of-the-art retrieval methods.\nSpecifically, we use the following formula to calculate the similarity between query q and pseudo-document Qi: w\u2208q \u00a1 Qi c(w, q) \u00d7 IDF(w) \u00d7 (k1 + 1) \u00d7 c(w, Qi) k1((1 \u2212 b) + b |Qi| avdl ) + c(w, Qi) where k1 and b are OKAPI parameters set empirically, c(w, Qi) and c(w, q) are the count of word w in Qi and q respectively, IDF(w) is the inverse document frequency of word w, and avdl is the average document length in our history collection.\nBased on the similarity scores, we rank all the documents in H.\nThe top ranked documents provide us a working set to learn the aspects that users are usually interested in.\nEach document in H corresponds to a past query, and thus the top ranked documents correspond to q``s related past queries.\n4.2 Learning Aspects by Clustering Given a query q, we use Hq = {d1, ..., dn} to represent the top ranked pseudo-documents from the history collection H.\nThese pseudo-documents contain the aspects that users are interested in.\nIn this subsection, we propose to use a clustering method to discover these aspects.\nAny clustering algorithm could be applied here.\nIn this paper, we use an algorithm based on graph partition: the star clustering algorithm [2].\nA good property of the star clustering in our setting is that it can suggest a good label for each cluster naturally.\nWe describe the star clustering algorithm below.\n4.2.1 Star Clustering Given Hq, star clustering starts with constructing a pairwise similarity graph on this collection based on the vector space model in information retrieval [18].\nThen the clusters are formed by dense subgraphs that are star-shaped.\nThese clusters form a cover of the similarity graph.\nFormally, for each of the n pseudo-documents {d1, ..., dn} in the collection Hq, we compute a TF-IDF vector.\nThen, for each pair of documents di and dj (i = j), their similarity is computed as the cosine score of their corresponding vectors vi and vj , that is sim(di, dj ) = cos(vi, vj) = vi \u00b7 vj |vi| \u00b7 |vj | .\nA similarity graph G\u03c3 can then be constructed as follows using a similarity threshold parameter \u03c3.\nEach document di is a vertex of G\u03c3.\nIf sim(di, dj) > \u03c3, there would be an edge connecting the corresponding two vertices.\nAfter the similarity graph G\u03c3 is built, the star clustering algorithm clusters the documents using a greedy algorithm as follows: 1.\nAssociate every vertex in G\u03c3 with a flag, initialized as unmarked.\n2.\nFrom those unmarked vertices, find the one which has the highest degree and let it be u. 
3.\nMark the flag of u as center.\n4.\nForm a cluster C containing u and all its neighbors that are not marked as center.\nMark all the selected neighbors as satellites.\n5.\nRepeat from step 2 until all the vertices in G\u03c3 are marked.\nEach cluster is star-shaped, which consists a single center and several satellites.\nThere is only one parameter \u03c3 in the star clustering algorithm.\nA big \u03c3 enforces that the connected documents have high similarities, and thus the clusters tend to be small.\nOn the other hand, a small \u03c3 will make the clusters big and less coherent.\nWe will study the impact of this parameter in our experiments.\nA good feature of the star clustering algorithm is that it outputs a center for each cluster.\nIn the past query collection Hq, each document corresponds to a query.\nThis center query can be regarded as the most representative one for the whole cluster, and thus provides a label for the cluster naturally.\nAll the clusters obtained are related to the input query q from different perspectives, and they represent the possible aspects of interests about query q of users.\n4.3 Categorizing Search Results In order to organize the search results according to users'' interests, we use the learned aspects from the related past queries to categorize the search results.\nGiven the top m Web pages returned by a search engine for q: {s1, ..., sm}, we group them into different aspects using a categorization algorithm.\nIn principle, any categorization algorithm can be used here.\nHere we use a simple centroid-based method for categorization.\nNaturally, more sophisticated methods such as SVM [21] may be expected to achieve even better performance.\nBased on the pseudo-documents in each discovered aspect Ci, we build a centroid prototype pi by taking the average of all the vectors of the documents in Ci: pi = 1 |Ci| l\u2208Ci vl.\nAll these pi``s are used to categorize the search results.\nSpecifically, for any search result sj, we build a TF-IDF vector.\nThe centroid-based method computes the cosine similarity between the vector representation of sj and each centroid prototype pi.\nWe then assign sj to the aspect with which it has the highest cosine similarity score.\nAll the aspects are finally ranked according to the number of search results they have.\nWithin each aspect, the search results are ranked according to their original search engine ranking.\n5.\nDATA COLLECTION We construct our data set based on the MSN search log data set released by the Microsoft Live Labs in 2006 [14].\nIn total, this log data spans 31 days from 05/01/2006 to 05/31/2006.\nThere are 8,144,000 queries, 3,441,000 distinct queries, and 4,649,000 distinct URLs in the raw data.\nTo test our algorithm, we separate the whole data set into two parts according to the time: the first 2/3 data is used to simulate the historical data that a search engine accumulated, and we use the last 1/3 to simulate future queries.\nIn the history collection, we clean the data by only keeping those frequent, well-formatted, English queries (queries which only contain characters `a'', `b'', ..., `z'', and space, and appear more than 5 times).\nAfter cleaning, we get 169,057 unique queries in our history data collection totally.\nOn average, each query has 3.5 distinct clicks.\nWe build the pseudo-documents for all these queries as described in Section 3.\nThe average length of these pseudo-documents is 68 words and the total data size of our history collection is 129MB.\nWe construct our test data 
from the last 1/3 data.\nAccording to the time, we separate this data into two test sets equally for cross-validation to set parameters.\nFor each test set, we use every session as a test case.\nEach session contains a single query and several clicks.\n(Note that we do not aggregate sessions for test cases.\nDifferent test cases may have the same queries but possibly different clicks.)\nSince it is infeasible to ask the original user who submitted a query to judge the results for the query, we follow the work [11] and opt to use the clicks associated with the query in a session to approximate relevant documents.\nUsing clicks as judgments, we can then compare different algorithms for organizing search results to see how well these algorithms can help users reach the clicked URLs.\nOrganizing search results into different aspects is expected to help informational queries.\nIt thus makes sense to focus on the informational queries in our evaluation.\nFor each test case, i.e., each session, we count the number of different clicks and filter out those test cases with fewer than 4 clicks under the assumption that a query with more clicks is more likely to be an informational query.\nSince we want to test whether our algorithm can learn from the past queries, we also filter out those test cases whose queries cannot retrieve at least 100 pseudo-documents from our history collection.\nFinally, we obtain 172 and 177 test cases in the first and second test sets respectively.\nOn average, we have 6.23 and 5.89 clicks for each test case in the two test sets respectively.\n6.\nEXPERIMENTS\nIn this section, we describe our experiments on search result organization based on past search engine logs.\n6.1 Experimental Design\nWe use two baseline methods to evaluate the proposed method for organizing search results.\nFor each test case, the first method is the default ranked list from a search engine (baseline).\nThe second method is to organize the search results by clustering them (cluster-based).\nFor fair comparison, we use the same clustering algorithm as our log-based method (i.e., star clustering).\nThat is, we treat each search result as a document, construct the similarity graph, and find the star-shaped clusters.\nWe compare our method (log-based) with the two baseline methods in the following experiments.\nFor both cluster-based and log-based methods, the search results within each cluster are ranked based on their original ranking given by the search engine.\nTo compare different result organization methods, we adopt a method similar to the one used in [9].\nThat is, we compare the quality (e.g., precision) of the best cluster, which is defined as the one with the largest number of relevant documents.\nOrganizing search results into clusters is meant to help users navigate into relevant documents quickly.\nThe above metric simulates a scenario in which users always choose the right cluster and look into it.\nSpecifically, we download and organize the top 100 search results into aspects for each test case.\nWe use Precision at 5 documents (P@5) in the best cluster as the primary measure to compare different methods.\nP@5 is a very meaningful measure as it tells us the perceived precision when the user opens a cluster and looks at the first 5 documents.\nWe also use Mean Reciprocal Rank (MRR) as another metric.\nMRR is calculated as $\mathrm{MRR} = \frac{1}{|T|} \sum_{q \in T} \frac{1}{r_q}$, where $T$ is the set of test queries and $r_q$ is the rank of the first relevant document for $q$.
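Both measures are straightforward to compute from the organized results and the click-based judgments. The sketch below illustrates one possible implementation; the data structures (a per-cluster list of document ids kept in the original engine order, and a set of clicked ids standing in for relevance judgments) and the function names are assumptions made for this example, not the authors' code.

```python
def reciprocal_rank(ranking, relevant):
    # ranking: list of document ids in the order shown to the user
    # relevant: set of document ids treated as relevant (the clicked URLs)
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(test_cases):
    # test_cases: non-empty list of (ranking, relevant) pairs, one per session
    return sum(reciprocal_rank(r, rel) for r, rel in test_cases) / len(test_cases)

def p_at_5_best_cluster(clusters, relevant):
    # clusters: list of clusters, each a list of document ids in the original
    # search engine order; the "best" cluster is the one holding the largest
    # number of relevant documents (the paper reports it always has > 5 results)
    best = max(clusters, key=lambda c: sum(1 for d in c if d in relevant))
    return sum(1 for d in best[:5] if d in relevant) / 5.0
```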
To give a fair comparison across different organization algorithms, we force both cluster-based and log-based methods to output the same number of aspects and force each search result to be in one and only one aspect.\nThe number of aspects is fixed at 10 in all the following experiments.\nThe star clustering algorithm can output different numbers of clusters for different inputs.\nTo constrain the number of clusters to 10, we order all the clusters by their sizes and select the top 10 as aspect candidates.\nWe then re-assign each search result to the one of these selected 10 aspects that has the highest similarity score with the corresponding aspect centroid.\nIn our experiments, we observe that the sizes of the best clusters are all larger than 5, and this ensures that P@5 is a meaningful metric.\n6.2 Experimental Results\nOur main hypothesis is that organizing search results based on the users' interests learned from a search log data set is more beneficial than organizing results as a simple ranked list or by clustering the search results.\nIn the following, we test our hypothesis from two perspectives: organization and labeling.\nMethod | Test set 1 (MRR / P@5) | Test set 2 (MRR / P@5)\nBaseline | 0.7347 / 0.3325 | 0.7393 / 0.3288\nCluster-based | 0.7735 / 0.3162 | 0.7666 / 0.2994\nLog-based | 0.7833 / 0.3534 | 0.7697 / 0.3389\nCluster/Baseline | 5.28% / -4.87% | 3.69% / -8.93%\nLog/Baseline | 6.62% / 6.31% | 4.10% / 3.09%\nLog/Cluster | 1.27% / 11.76% | 0.40% / 13.20%\nTable 2: Comparison of different methods by MRR and P@5.\nThe lower part shows the percentage of relative improvement.\nComparison | Test set 1 (Impr./Decr.) | Test set 2 (Impr./Decr.)\nCluster/Baseline | 53/55 | 50/64\nLog/Baseline | 55/44 | 60/45\nLog/Cluster | 68/47 | 69/44\nTable 3: Pairwise comparison w.r.t. the number of test cases whose P@5's are improved versus decreased w.r.t. the baseline.\n6.2.1 Overall performance\nWe compare three methods, basic search engine ranking (baseline), the traditional clustering-based method (cluster-based), and our log-based method (log-based), in Table 2 using MRR and P@5.\nWe optimize the parameter \u03c3 for each collection individually based on P@5 values.\nThis shows the best performance that each method can achieve.\nIn this table, we can see that in both test collections, our method is better than both the baseline and the cluster-based methods.\nFor example, in the first test collection, the MRR of the baseline method is 0.734, the cluster-based method is 0.773, and our method is 0.783.\nWe achieve higher accuracy than both the cluster-based method (1.27% improvement) and the baseline method (6.62% improvement).\nThe P@5 values are 0.332 for the baseline, 0.316 for the cluster-based method, but 0.353 for our method.\nOur method improves over the baseline by 6.31%, while the cluster-based method even decreases the accuracy.\nThis is because the cluster-based method organizes the search results only based on the contents.\nThus it could organize the results differently from users' preferences.\nThis confirms our hypothesis of the bias of the cluster-based method.\nComparing our method with the cluster-based method, we achieve significant improvement on both test collections.\nThe p-values of the significance tests based on P@5 on both collections are 0.01 and 0.02 respectively.\nThis shows that our log-based method is effective at learning users' preferences from the past query history, and thus it can organize the search results in a more useful way to users.
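The constraint described above (at most 10 aspects, each result assigned to exactly one of them) amounts to a nearest-centroid assignment. The following is a minimal sketch of that step under the assumption that clusters are given as (label, centroid vector, size) tuples and that vectors are sparse term-weight dictionaries; these representations and names are illustrative, not taken from the paper.

```python
import math

def cosine(u, v):
    # u, v: sparse TF-IDF vectors as {term: weight} dictionaries
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def assign_results_to_aspects(result_vectors, clusters, k=10):
    # clusters: list of (label, centroid_vector, size) produced by star clustering;
    # keep the k largest clusters as aspect candidates, then give every search
    # result to the candidate whose centroid it is most similar to
    candidates = sorted(clusters, key=lambda c: c[2], reverse=True)[:k]
    assignment = []
    for vec in result_vectors:
        label, _, _ = max(candidates, key=lambda c: cosine(vec, c[1]))
        assignment.append(label)
    return assignment
```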
We showed the optimal results above.\nTo test the sensitivity of the parameter \u03c3 of our log-based method, we use one of the test sets to tune the parameter to be optimal and then use the tuned parameter on the other set.\nWe compare this result (log tuned outside) with the optimal results of both the cluster-based (cluster optimized) and log-based (log optimized) methods in Figure 1.\nWe can see that, as expected, the performance using the parameter tuned on a separate set is worse than the optimal performance.\nHowever, our method still performs much better than the optimal results of the cluster-based method on both test collections.\nFigure 1: Results using parameters tuned from the other test collection, compared with the optimal performance of the cluster-based and our log-based methods (P@5 on test sets 1 and 2 for cluster optimized, log optimized, and log tuned outside).\nFigure 2: The correlation between performance change and result diversity (x-axis: bin number; y-axis: number of queries improved versus decreased).\nIn Table 3, we show pairwise comparisons of the three methods in terms of the numbers of test cases for which P@5 is increased versus decreased.\nWe can see that our method improves more test cases compared with the other two methods.\nIn the next section, we show a more detailed analysis to see what types of test cases can be improved by our method.\n6.2.2 Detailed Analysis\nTo better understand the cases where our log-based method can improve the accuracy, we test two properties: result diversity and query difficulty.\nAll the analysis below is based on test set 1.\nDiversity Analysis: Intuitively, organizing search results into different aspects is more beneficial to those queries whose results are more diverse, as for such queries, the results tend to form two or more big clusters.\nIn order to test the hypothesis that the log-based method helps queries with diverse results more, we compute the size ratio of the biggest and second biggest clusters in our log-based results and use this ratio as an indicator of diversity.\nIf the ratio is small, it means that the first two clusters have a small difference in size, and thus the results are more diverse.\nIn this case, we would expect our method to help more.\nThe results are shown in Figure 2.\nIn this figure, we partition the ratios into 4 bins.\nThe 4 bins correspond to the ratio ranges [1, 2), [2, 3), [3, 4), and [4, +\u221e) respectively.\n([i, j) means that i \u2264 ratio < j.)
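For illustration, the diversity indicator and its binning can be computed as in the short sketch below; the function names are ours, and the sketch assumes at least two clusters per query.

```python
def diversity_ratio(cluster_sizes):
    # cluster_sizes: sizes of the aspects produced for one query
    # (assumed to contain at least two clusters)
    top = sorted(cluster_sizes, reverse=True)
    return top[0] / top[1]

def ratio_bin(ratio):
    # bins [1, 2), [2, 3), [3, 4), [4, +inf) are numbered 1..4; a ratio close to 1
    # means the two biggest aspects have similar sizes, i.e. the results are diverse
    return min(int(ratio), 4)

# example: sizes 18 and 15 for the two biggest aspects -> ratio 1.2 -> bin 1
```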
In each bin, we count the numbers of test cases whose P@5``s are improved versus decreased with respect to the ranking baseline, and plot the numbers in this figure.\nWe can observe that when the ratio is smaller, the log-based method can improve more test cases.\nBut when 0 5 10 15 20 25 30 1 2 3 4 Bin number #Queries Improved Decreased Figure 3: The correlation between performance change and query difficulty.\nthe ratio is large, the log-based method can not improve over the baseline.\nFor example, in bin 1, 48 test cases are improved and 34 are decreased.\nBut in bin 4, all the 4 test cases are decreased.\nThis confirms our hypothesis that our method can help more if the query has more diverse results.\nThis also suggests that we should turn off the option of re-organizing search results if the results are not very diverse (e.g., as indicated by the cluster size ratio).\nDifficulty Analysis: Difficult queries have been studied in recent years [7, 25, 5].\nHere we analyze the effectiveness of our method in helping difficult queries.\nWe quantify the query difficulty by the Mean Average Precision (MAP) of the original search engine ranking for each test case.\nWe then order the 172 test cases in test set 1 in an increasing order of MAP values.\nWe partition the test cases into 4 bins with each having a roughly equal number of test cases.\nA small MAP means that the utility of the original ranking is low.\nBin 1 contains those test cases with the lowest MAP``s and bin 4 contains those test cases with the highest MAP``s. For each bin, we compute the numbers of test cases whose P@5``s are improved versus decreased.\nFigure 3 shows the results.\nClearly, in bin 1, most of the test cases are improved (24 vs 3), while in bin 4, log-based method may decrease the performance (3 vs 20).\nThis shows that our method is more beneficial to difficult queries, which is as expected since clustering search results is intended to help difficult queries.\nThis also shows that our method does not really help easy queries, thus we should turn off our organization option for easy queries.\n6.2.3 Parameter Setting We examine parameter sensitivity in this section.\nFor the star clustering algorithm, we study the similarity threshold parameter \u03c3.\nFor the OKAPI retrieval function, we study the parameters k1 and b.\nWe also study the impact of the number of past queries retrieved in our log-based method.\nFigure 4 shows the impact of the parameter \u03c3 for both cluster-based and log-based methods on both test sets.\nWe vary \u03c3 from 0.05 to 0.3 with step 0.05.\nFigure 4 shows that the performance is not very sensitive to the parameter \u03c3.\nWe can always obtain the best result in range 0.1 \u2264 \u03c3 \u2264 0.25.\nIn Table 4, we show the impact of OKAPI parameters.\nWe vary k1 from 1.0 to 2.0 with step 0.2 and b from 0 to 1 with step 0.2.\nFrom this table, it is clear that P@5 is also not very sensitive to the parameter setting.\nMost of the values are larger than 0.35.\nThe default values k1 = 1.2 and b = 0.8 give approximately optimal results.\nWe further study the impact of the amount of history 0.2 0.25 0.3 0.35 0.4 0.05 0.1 0.15 0.2 0.25 0.3 P@5 similarity threhold: sigma cluster-based 1 log-based 1 cluster-based 2 log-based 2 Figure 4: The impact of similarity threshold \u03c3 on both cluster-based and log-based methods.\nWe show the result on both test collections.\nb 0.0 0.2 0.4 0.6 0.8 1.0 1.0 0.3476 0.3406 0.3453 0.3616 0.3500 0.3453 1.2 0.3418 0.3383 0.3453 0.3593 0.3534 0.3546 k1 1.4 
0.3337 0.3430 0.3476 0.3604 0.3546 0.3465 1.6 0.3476 0.3418 0.3523 0.3534 0.3581 0.3476 1.8 0.3465 0.3418 0.3546 0.3558 0.3616 0.3476 2.0 0.3453 0.3500 0.3534 0.3558 0.3569 0.3546 Table 4: Impact of OKAPI parameters k1 and b. information to learn from by varying the number of past queries to be retrieved for learning aspects.\nThe results on both test collections are shown in Figure 5.\nWe can see that the performance gradually increases as we enlarge the number of past queries retrieved.\nThus our method could potentially learn more as we accumulate more history.\nMore importantly, as time goes, more and more queries will have sufficient history, so we can improve more and more queries.\n6.2.4 An Illustrative Example We use the query area codes to show the difference in the results of the log-based method and the cluster-based method.\nThis query may mean phone codes or zip codes.\nTable 5 shows the representative keywords extracted from the three biggest clusters of both methods.\nIn the clusterbased method, the results are partitioned based on locations: local or international.\nIn the log-based method, the results are disambiguated into two senses: phone codes or zip codes.\nWhile both are reasonable partitions, our evaluation indicates that most users using such a query are often interested in either phone codes or zip codes.\nsince the P@5 values of cluster-based and log-based methods are 0.2 and 0.6, respectively.\nTherefore our log-based method is more effective in helping users to navigate into their desired results.\nCluster-based method Log-based method city, state telephone, city, international local, area phone, dialing international zip, postal Table 5: An example showing the difference between the cluster-based method and our log-based method 0.16 0.18 0.2 0.22 0.24 0.26 0.28 0.3 1501201008050403020 P@5 #queries retrieved Test set 1 Test set 2 Figure 5: The impact of the number of past queries retrieved.\n6.2.5 Labeling Comparison We now compare the labels between the cluster-based method and log-based method.\nThe cluster-based method has to rely on the keywords extracted from the snippets to construct the label for each cluster.\nOur log-based method can avoid this difficulty by taking advantage of queries.\nSpecifically, for the cluster-based method, we count the frequency of a keyword appearing in a cluster and use the most frequent keywords as the cluster label.\nFor log-based method, we use the center of each star cluster as the label for the corresponding cluster.\nIn general, it is not easy to quantify the readability of a cluster label automatically.\nWe use examples to show the difference between the cluster-based and the log-based methods.\nIn Table 6, we list the labels of the top 5 clusters for two examples jaguar and apple.\nFor the cluster-based method, we separate keywords by commas since they do not form a phrase.\nFrom this table, we can see that our log-based method gives more readable labels because it generates labels based on users'' queries.\nThis is another advantage of our way of organizing search results over the clustering approach.\nLabel comparison for query jaguar Log-based method Cluster-based method 1.\njaguar animal 1.\njaguar, auto, accessories 2.\njaguar auto accessories 2.\njaguar, type, prices 3.\njaguar cats 3.\njaguar, panthera, cats 4.\njaguar repair 4.\njaguar, services, boston 5.\njaguar animal pictures 5.\njaguar, collection, apparel Label comparison for query apple Log-based method Cluster-based method 1.\napple computer 1.\napple, 
support, product 2.\napple ipod 2.\napple, site, computer 3.\napple crisp recipe 3.\napple, world, visit 4.\nfresh apple cake 4.\napple, ipod, amazon 5.\napple laptop 5.\napple, products, news Table 6: Cluster label comparison.\n7.\nCONCLUSIONS AND FUTURE WORK In this paper, we studied the problem of organizing search results in a user-oriented manner.\nTo attain this goal, we rely on search engine logs to learn interesting aspects from users'' perspective.\nGiven a query, we retrieve its related queries from past query history, learn the aspects by clustering the past queries and the associated clickthrough information, and categorize the search results into the aspects learned.\nWe compared our log-based method with the traditional cluster-based method and the baseline of search engine ranking.\nThe experiments show that our log-based method can consistently outperform cluster-based method and improve over the ranking baseline, especially when the queries are difficult or the search results are diverse.\nFurthermore, our log-based method can generate more meaningful aspect labels than the cluster labels generated based on search results when we cluster search results.\nThere are several interesting directions for further extending our work: First, although our experiment results have clearly shown promise of the idea of learning from search logs to organize search results, the methods we have experimented with are relatively simple.\nIt would be interesting to explore other potentially more effective methods.\nIn particular, we hope to develop probabilistic models for learning aspects and organizing results simultaneously.\nSecond, with the proposed way of organizing search results, we can expect to obtain informative feedback information from a user (e.g., the aspect chosen by a user to view).\nIt would thus be interesting to study how to further improve the organization of the results based on such feedback information.\nFinally, we can combine a general search log with any personal search log to customize and optimize the organization of search results for each individual user.\n8.\nACKNOWLEDGMENTS We thank the anonymous reviewers for their valuable comments.\nThis work is in part supported by a Microsoft Live Labs Research Grant, a Google Research Grant, and an NSF CAREER grant IIS-0347933.\n9.\nREFERENCES [1] E. Agichtein, E. Brill, and S. T. Dumais.\nImproving web search ranking by incorporating user behavior information.\nIn SIGIR, pages 19-26, 2006.\n[2] J. A. Aslam, E. Pelekov, and D. Rus.\nThe star clustering algorithm for static and dynamic information organization.\nJournal of Graph Algorithms and Applications, 8(1):95-129, 2004.\n[3] R. A. Baeza-Yates.\nApplications of web query mining.\nIn ECIR, pages 7-22, 2005.\n[4] D. Beeferman and A. L. Berger.\nAgglomerative clustering of a search engine query log.\nIn KDD, pages 407-416, 2000.\n[5] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg.\nWhat makes a query difficult?\nIn SIGIR, pages 390-397, 2006.\n[6] H. Chen and S. T. Dumais.\nBringing order to the web: automatically categorizing search results.\nIn CHI, pages 145-152, 2000.\n[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft.\nPredicting query performance.\nIn Proceedings of ACM SIGIR 2002, pages 299-306, 2002.\n[8] S. T. Dumais, E. Cutrell, and H. Chen.\nOptimizing search by showing results in context.\nIn CHI, pages 277-284, 2001.\n[9] M. A. Hearst and J. O. 
Pedersen.\nReexamining the cluster hypothesis: Scatter/gather on retrieval results.\nIn SIGIR, pages 76-84, 1996.\n[10] T. Joachims.\nOptimizing search engines using clickthrough data.\nIn KDD, pages 133-142, 2002.\n[11] T. Joachims.\nEvaluating Retrieval Performance Using Clickthrough Data., pages 79-96.\nPhysica/Springer Verlag, 2003.\nin J. Franke and G. Nakhaeizadeh and I. Renz, Text Mining.\n[12] R. Jones, B. Rey, O. Madani, and W. Greiner.\nGenerating query substitutions.\nIn WWW, pages 387-396, 2006.\n[13] K. Kummamuru, R. Lotlikar, S. Roy, K. Singal, and R. Krishnapuram.\nA hierarchical monothetic document clustering algorithm for summarization and browsing search results.\nIn WWW, pages 658-665, 2004.\n[14] Microsoft Live Labs.\nAccelerating search in academic research, 2006.\nhttp://research.microsoft.com/ur/us/fundingopps/RFPs/ Search 2006 RFP.aspx.\n[15] P. Pirolli, P. K. Schank, M. A. Hearst, and C. Diehl.\nScatter/gather browsing communicates the topic structure of a very large text collection.\nIn CHI, pages 213-220, 1996.\n[16] F. Radlinski and T. Joachims.\nQuery chains: learning to rank from implicit feedback.\nIn KDD, pages 239-248, 2005.\n[17] S. E. Robertson and S. Walker.\nSome simple effective approximations to the 2-poisson model for probabilistic weighted retrieval.\nIn SIGIR, pages 232-241, 1994.\n[18] G. Salton, A. Wong, and C. S. Yang.\nA vector space model for automatic indexing.\nCommun.\nACM, 18(11):613-620, 1975.\n[19] X. Shen, B. Tan, and C. Zhai.\nContext-sensitive information retrieval using implicit feedback.\nIn SIGIR, pages 43-50, 2005.\n[20] C. J. van Rijsbergen.\nInformation Retrieval, second edition.\nButterworths, London, 1979.\n[21] V. N. Vapnik.\nThe Nature of Statistical Learning Theory.\nSpringer-Verlag, Berlin, 1995.\n[22] Vivisimo.\nhttp://vivisimo.com/.\n[23] X. Wang, J.-T.\nSun, Z. Chen, and C. Zhai.\nLatent semantic analysis for multiple-type interrelated data objects.\nIn SIGIR, pages 236-243, 2006.\n[24] J.-R.\nWen, J.-Y.\nNie, and H. Zhang.\nClustering user queries of a search engine.\nIn WWW, pages 162-168, 2001.\n[25] E. Yom-Tov, S. Fine, D. Carmel, and A. Darlow.\nLearning to estimate query difficulty: including applications to missing content detection and distributed information retrieval.\nIn SIGIR, pages 512-519, 2005.\n[26] O. Zamir and O. Etzioni.\nWeb document clustering: A feasibility demonstration.\nIn SIGIR, pages 46-54, 1998.\n[27] O. Zamir and O. Etzioni.\nGrouper: A dynamic clustering interface to web search results.\nComputer Networks, 31(11-16):1361-1374, 1999.\n[28] H.-J.\nZeng, Q.-C.\nHe, Z. Chen, W.-Y.\nMa, and J. 
Ma.\nLearning to cluster web search results.\nIn SIGIR, pages 210-217, 2004.", "lvl-3": "Learn from Web Search Logs to Organize Search Results\nABSTRACT\nEffective organization of search results is critical for improving the utility of any search engine .\nClustering search results is an effective way to organize search results , which allows a user to navigate into relevant documents quickly .\nHowever , two deficiencies of this approach make it not always work well : ( 1 ) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user 's perspective ; and ( 2 ) the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nIn this paper , we propose to address these two deficiencies by ( 1 ) learning `` interesting aspects '' of a topic from Web search logs and organizing search results accordingly ; and ( 2 ) generating more meaningful cluster labels using past query words entered by users .\nWe evaluate our proposed method on a commercial search engine log data .\nCompared with the traditional methods of clustering search results , our method can give better result organization and more meaningful labels .\n1 .\nINTRODUCTION\nThe utility of a search engine is affected by multiple factors .\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function , how to organize and present search results is also a very important factor that can affect the utility of a search engine significantly .\nCompared with the vast amount of literature on retrieval models , however , there is relatively little research on how to improve the effectiveness of search result organization .\nThe most common strategy of presenting search results is a simple ranked list .\nIntuitively , such a presentation strategy is reasonable for non-ambiguous , homogeneous search\nresults ; in general , it would work well when the search results are good and a user can easily find many relevant documents in the top ranked results .\nHowever , when the search results are diverse ( e.g. , due to ambiguity or multiple aspects of a topic ) as is often the case in Web search , the ranked list presentation would not be effective ; in such a case , it would be better to group the search results into clusters so that a user can easily navigate into a particular interesting group .\nFor example , the results in the first page returned from Google for the ambiguous query `` jaguar '' ( as of Dec. 2nd , 2006 ) contain at least four different senses of `` jaguar '' ( i.e. 
, car , animal , software , and a sports team ) ; even for a more refined query such as `` jaguar team picture '' , the results are still quite ambiguous , including at least four different jaguar teams -- a wrestling team , a jaguar car team , Southwestern College Jaguar softball team , and Jacksonville Jaguar football team .\nMoreover , if a user wants to find a place to download a jaguar software , a query such as `` download jaguar '' is also not very effective as the dominating results are about downloading jaguar brochure , jaguar wallpaper , and jaguar DVD .\nIn these examples , a clustering view of the search results would be much more useful to a user than a simple ranked list .\nClustering is also useful when the search results are poor , in which case , a user would otherwise have to go through a long list sequentially to reach the very first relevant document .\nAs a primary alternative strategy for presenting search results , clustering search results has been studied relatively extensively [ 9 , 15 , 26 , 27 , 28 ] .\nThe general idea in virtually all the existing work is to perform clustering on a set of topranked search results to partition the results into natural clusters , which often correspond to different subtopics of the general query topic .\nA label will be generated to indicate what each cluster is about .\nA user can then view the labels to decide which cluster to look into .\nSuch a strategy has been shown to be more useful than the simple ranked list presentation in several studies [ 8 , 9 , 26 ] .\nHowever , this clustering strategy has two deficiencies which make it not always work well : First , the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user 's perspective .\nFor example , users are often interested in finding either `` phone codes '' or `` zip codes '' when entering the query `` area codes . ''\nBut the clusters discovered by the current methods may partition the results into `` local codes '' and `` international codes . ''\nSuch clusters would not be very useful for users ; even the best cluster would still have a low precision .\nSecond , the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nThere are two reasons for this problem : ( 1 ) The clusters are not corresponding to a user 's interests , so their labels would not be very meaningful or useful .\n( 2 ) Even if a cluster really corresponds to an interesting aspect of the topic , the label may not be informative because it is usually generated based on the contents in a cluster , and it is possible that the user is not very familiar with some of the terms .\nFor example , the ambiguous query `` jaguar '' may mean an animal or a car .\nA cluster may be labeled as `` panthera onca . 
''\nAlthough this is an accurate label for a cluster with the `` animal '' sense of `` jaguar '' , if a user is not familiar with the phrase , the label would not be helpful .\nIn this paper , we propose a different strategy for partitioning search results , which addresses these two deficiencies through imposing a user-oriented partitioning of the search results .\nThat is , we try to figure out what aspects of a search topic are likely interesting to a user and organize the results accordingly .\nSpecifically , we propose to do the following : First , we will learn `` interesting aspects '' of similar topics from search logs and organize search results based on these `` interesting aspects '' .\nFor example , if the current query has occurred many times in the search logs , we can look at what kinds of pages viewed by the users in the results and what kind of words are used together with such a query .\nIn case when the query is ambiguous such as `` jaguar '' we can expect to see some clear clusters corresponding different senses of `` jaguar '' .\nMore importantly , even if a word is not ambiguous ( e.g. , `` car '' ) , we may still discover interesting aspects such as `` car rental '' and `` car pricing '' ( which happened to be the two primary aspects discovered in our search log data ) .\nSuch aspects can be very useful for organizing future search results about `` car '' .\nNote that in the case of `` car '' , clusters generated using regular clustering may not necessarily reflect such interesting aspects about `` car '' from a user 's perspective , even though the generated clusters are coherent and meaningful in other ways .\nSecond , we will generate more meaningful cluster labels using past query words entered by users .\nAssuming that the past search logs can help us learn what specific aspects are interesting to users given the current query topic , we could also expect that those query words entered by users in the past that are associated with the current query can provide meaningful descriptions of the distinct aspects .\nThus they can be better labels than those extracted from the ordinary contents of search results .\nTo implement the ideas presented above , we rely on search engine logs and build a history collection containing the past queries and the associated clickthroughs .\nGiven a new query , we find its related past queries from the history collection and learn aspects through applying the star clustering algorithm [ 2 ] to these past queries and clickthroughs .\nWe can then organize the search results into these aspects using categorization techniques and label each aspect by the most representative past query in the query cluster .\nWe evaluate our method for result organization using logs of a commercial search engine .\nWe compare our method with the default search engine ranking and the traditional clustering of search results .\nThe results show that our method is effective for improving search utility and the labels generated using past query words are more readable than those generated using traditional clustering approaches .\nThe rest of the paper is organized as follows .\nWe first review the related work in Section 2 .\nIn Section 3 , we describe search engine log data and our procedure of building a history collection .\nIn Section 4 , we present our approach in details .\nWe describe the data set in Section 5 and the experimental results are discussed in Section 6 .\nFinally , we conclude our paper and discuss future work in Section 7 .\n2 
.\nRELATED WORK\nOur work is closely related to the study of clustering search results .\nIn [ 9 , 15 ] , the authors used Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system .\nTheir results validate the cluster hypothesis [ 20 ] that relevant documents tend to form clusters .\nThe system `` Grouper '' was described in [ 26 , 27 ] .\nIn these papers , the authors proposed to cluster the results of a real search engine based on the snippets or the contents of returned documents .\nSeveral clustering algorithms are compared and the Suffix Tree Clustering algorithm ( STC ) was shown to be the most effective one .\nThey also showed that using snippets is as effective as using whole documents .\nHowever , an important challenge of document clustering is to generate meaningful labels for clusters .\nTo overcome this difficulty , in [ 28 ] , supervised learning algorithms were studied to extract meaningful phrases from the search result snippets and these phrases were then used to group search results .\nIn [ 13 ] , the authors proposed to use a monothetic clustering algorithm , in which a document is assigned to a cluster based on a single feature , to organize search results , and the single feature is used to label the corresponding cluster .\nClustering search results has also attracted a lot of attention in industry and commercial Web services such as Vivisimo [ 22 ] .\nHowever , in all these works , the clusters are generated solely based on the search results .\nThus the obtained clusters do not necessarily reflect users ' preferences and the generated labels may not be informative from a user 's viewpoint .\nMethods of organizing search results based on text categorization are studied in [ 6 , 8 ] .\nIn this work , a text classifier is trained using a Web directory and search results are then classified into the predefined categories .\nThe authors designed and studied different category interfaces and they found that category interfaces are more effective than list interfaces .\nHowever predefined categories are often too general to reflect the finer granularity aspects of a query .\nSearch logs have been exploited for several different purposes in the past .\nFor example , clustering search queries to find those Frequent Asked Questions ( FAQ ) is studied in [ 24 , 4 ] .\nRecently , search logs have been used for suggesting query substitutes [ 12 ] , personalized search [ 19 ] , Web site design [ 3 ] , Latent Semantic Analysis [ 23 ] , and learning retrieval ranking functions [ 16 , 10 , 1 ] .\nIn our work , we explore past query history in order to better organize the search results for future queries .\nWe use the star clustering algorithm [ 2 ] , which is a graph partition based approach , to learn interesting aspects from search logs given a new query .\nThus past queries are clustered in a query specific manner and this is another difference from previous works such as [ 24 , 4 ] in which all queries in logs are clustered in an offline batch manner .\n3 .\nSEARCH ENGINE LOGS\n4 .\nOUR APPROACH\n4.1 Finding Related Past Queries\n4.2 Learning Aspects by Clustering\n4.2.1 Star Clustering\n4.3 Categorizing Search Results\n5 .\nDATA COLLECTION\n6 .\nEXPERIMENTS\n6.1 Experimental Design\n6.2 Experimental Results\n6.2.1 Overall performance\n6.2.2 Detailed Analysis\n6.2.3 Parameter Setting\n6.2.4 An Illustrative Example\n6.2.5 Labeling Comparison\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper , we studied the problem of 
organizing search results in a user-oriented manner .\nTo attain this goal , we rely on search engine logs to learn interesting aspects from users ' perspective .\nGiven a query , we retrieve its related\nqueries from past query history , learn the aspects by clustering the past queries and the associated clickthrough information , and categorize the search results into the aspects learned .\nWe compared our log-based method with the traditional cluster-based method and the baseline of search engine ranking .\nThe experiments show that our log-based method can consistently outperform cluster-based method and improve over the ranking baseline , especially when the queries are difficult or the search results are diverse .\nFurthermore , our log-based method can generate more meaningful aspect labels than the cluster labels generated based on search results when we cluster search results .\nThere are several interesting directions for further extending our work : First , although our experiment results have clearly shown promise of the idea of learning from search logs to organize search results , the methods we have experimented with are relatively simple .\nIt would be interesting to explore other potentially more effective methods .\nIn particular , we hope to develop probabilistic models for learning aspects and organizing results simultaneously .\nSecond , with the proposed way of organizing search results , we can expect to obtain informative feedback information from a user ( e.g. , the aspect chosen by a user to view ) .\nIt would thus be interesting to study how to further improve the organization of the results based on such feedback information .\nFinally , we can combine a general search log with any personal search log to customize and optimize the organization of search results for each individual user .", "lvl-4": "Learn from Web Search Logs to Organize Search Results\nABSTRACT\nEffective organization of search results is critical for improving the utility of any search engine .\nClustering search results is an effective way to organize search results , which allows a user to navigate into relevant documents quickly .\nHowever , two deficiencies of this approach make it not always work well : ( 1 ) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user 's perspective ; and ( 2 ) the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nIn this paper , we propose to address these two deficiencies by ( 1 ) learning `` interesting aspects '' of a topic from Web search logs and organizing search results accordingly ; and ( 2 ) generating more meaningful cluster labels using past query words entered by users .\nWe evaluate our proposed method on a commercial search engine log data .\nCompared with the traditional methods of clustering search results , our method can give better result organization and more meaningful labels .\n1 .\nINTRODUCTION\nThe utility of a search engine is affected by multiple factors .\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function , how to organize and present search results is also a very important factor that can affect the utility of a search engine significantly .\nCompared with the vast amount of literature on retrieval models , however , there is relatively little research on how to improve the effectiveness of search result organization .\nThe most common strategy of presenting search results is a simple 
ranked list .\nIntuitively , such a presentation strategy is reasonable for non-ambiguous , homogeneous search\nresults ; in general , it would work well when the search results are good and a user can easily find many relevant documents in the top ranked results .\nIn these examples , a clustering view of the search results would be much more useful to a user than a simple ranked list .\nClustering is also useful when the search results are poor , in which case , a user would otherwise have to go through a long list sequentially to reach the very first relevant document .\nAs a primary alternative strategy for presenting search results , clustering search results has been studied relatively extensively [ 9 , 15 , 26 , 27 , 28 ] .\nThe general idea in virtually all the existing work is to perform clustering on a set of topranked search results to partition the results into natural clusters , which often correspond to different subtopics of the general query topic .\nA label will be generated to indicate what each cluster is about .\nA user can then view the labels to decide which cluster to look into .\nHowever , this clustering strategy has two deficiencies which make it not always work well : First , the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user 's perspective .\nBut the clusters discovered by the current methods may partition the results into `` local codes '' and `` international codes . ''\nSuch clusters would not be very useful for users ; even the best cluster would still have a low precision .\nSecond , the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nThere are two reasons for this problem : ( 1 ) The clusters are not corresponding to a user 's interests , so their labels would not be very meaningful or useful .\nFor example , the ambiguous query `` jaguar '' may mean an animal or a car .\nA cluster may be labeled as `` panthera onca . 
''\nIn this paper , we propose a different strategy for partitioning search results , which addresses these two deficiencies through imposing a user-oriented partitioning of the search results .\nThat is , we try to figure out what aspects of a search topic are likely interesting to a user and organize the results accordingly .\nSpecifically , we propose to do the following : First , we will learn `` interesting aspects '' of similar topics from search logs and organize search results based on these `` interesting aspects '' .\nFor example , if the current query has occurred many times in the search logs , we can look at what kinds of pages viewed by the users in the results and what kind of words are used together with such a query .\nIn case when the query is ambiguous such as `` jaguar '' we can expect to see some clear clusters corresponding different senses of `` jaguar '' .\nSuch aspects can be very useful for organizing future search results about `` car '' .\nSecond , we will generate more meaningful cluster labels using past query words entered by users .\nThus they can be better labels than those extracted from the ordinary contents of search results .\nTo implement the ideas presented above , we rely on search engine logs and build a history collection containing the past queries and the associated clickthroughs .\nGiven a new query , we find its related past queries from the history collection and learn aspects through applying the star clustering algorithm [ 2 ] to these past queries and clickthroughs .\nWe can then organize the search results into these aspects using categorization techniques and label each aspect by the most representative past query in the query cluster .\nWe evaluate our method for result organization using logs of a commercial search engine .\nWe compare our method with the default search engine ranking and the traditional clustering of search results .\nThe results show that our method is effective for improving search utility and the labels generated using past query words are more readable than those generated using traditional clustering approaches .\nThe rest of the paper is organized as follows .\nWe first review the related work in Section 2 .\nIn Section 3 , we describe search engine log data and our procedure of building a history collection .\nIn Section 4 , we present our approach in details .\nWe describe the data set in Section 5 and the experimental results are discussed in Section 6 .\nFinally , we conclude our paper and discuss future work in Section 7 .\n2 .\nRELATED WORK\nOur work is closely related to the study of clustering search results .\nIn [ 9 , 15 ] , the authors used Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system .\nTheir results validate the cluster hypothesis [ 20 ] that relevant documents tend to form clusters .\nIn these papers , the authors proposed to cluster the results of a real search engine based on the snippets or the contents of returned documents .\nSeveral clustering algorithms are compared and the Suffix Tree Clustering algorithm ( STC ) was shown to be the most effective one .\nThey also showed that using snippets is as effective as using whole documents .\nHowever , an important challenge of document clustering is to generate meaningful labels for clusters .\nTo overcome this difficulty , in [ 28 ] , supervised learning algorithms were studied to extract meaningful phrases from the search result snippets and these phrases were then used to group 
search results .\nIn [ 13 ] , the authors proposed to use a monothetic clustering algorithm , in which a document is assigned to a cluster based on a single feature , to organize search results , and the single feature is used to label the corresponding cluster .\nClustering search results has also attracted a lot of attention in industry and commercial Web services such as Vivisimo [ 22 ] .\nHowever , in all these works , the clusters are generated solely based on the search results .\nThus the obtained clusters do not necessarily reflect users ' preferences and the generated labels may not be informative from a user 's viewpoint .\nMethods of organizing search results based on text categorization are studied in [ 6 , 8 ] .\nIn this work , a text classifier is trained using a Web directory and search results are then classified into the predefined categories .\nThe authors designed and studied different category interfaces and they found that category interfaces are more effective than list interfaces .\nHowever predefined categories are often too general to reflect the finer granularity aspects of a query .\nSearch logs have been exploited for several different purposes in the past .\nFor example , clustering search queries to find those Frequent Asked Questions ( FAQ ) is studied in [ 24 , 4 ] .\nIn our work , we explore past query history in order to better organize the search results for future queries .\nWe use the star clustering algorithm [ 2 ] , which is a graph partition based approach , to learn interesting aspects from search logs given a new query .\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper , we studied the problem of organizing search results in a user-oriented manner .\nTo attain this goal , we rely on search engine logs to learn interesting aspects from users ' perspective .\nGiven a query , we retrieve its related\nqueries from past query history , learn the aspects by clustering the past queries and the associated clickthrough information , and categorize the search results into the aspects learned .\nWe compared our log-based method with the traditional cluster-based method and the baseline of search engine ranking .\nThe experiments show that our log-based method can consistently outperform cluster-based method and improve over the ranking baseline , especially when the queries are difficult or the search results are diverse .\nFurthermore , our log-based method can generate more meaningful aspect labels than the cluster labels generated based on search results when we cluster search results .\nThere are several interesting directions for further extending our work : First , although our experiment results have clearly shown promise of the idea of learning from search logs to organize search results , the methods we have experimented with are relatively simple .\nIt would be interesting to explore other potentially more effective methods .\nIn particular , we hope to develop probabilistic models for learning aspects and organizing results simultaneously .\nSecond , with the proposed way of organizing search results , we can expect to obtain informative feedback information from a user ( e.g. 
, the aspect chosen by a user to view ) .\nIt would thus be interesting to study how to further improve the organization of the results based on such feedback information .\nFinally , we can combine a general search log with any personal search log to customize and optimize the organization of search results for each individual user .", "lvl-2": "Learn from Web Search Logs to Organize Search Results\nABSTRACT\nEffective organization of search results is critical for improving the utility of any search engine .\nClustering search results is an effective way to organize search results , which allows a user to navigate into relevant documents quickly .\nHowever , two deficiencies of this approach make it not always work well : ( 1 ) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user 's perspective ; and ( 2 ) the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nIn this paper , we propose to address these two deficiencies by ( 1 ) learning `` interesting aspects '' of a topic from Web search logs and organizing search results accordingly ; and ( 2 ) generating more meaningful cluster labels using past query words entered by users .\nWe evaluate our proposed method on a commercial search engine log data .\nCompared with the traditional methods of clustering search results , our method can give better result organization and more meaningful labels .\n1 .\nINTRODUCTION\nThe utility of a search engine is affected by multiple factors .\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function , how to organize and present search results is also a very important factor that can affect the utility of a search engine significantly .\nCompared with the vast amount of literature on retrieval models , however , there is relatively little research on how to improve the effectiveness of search result organization .\nThe most common strategy of presenting search results is a simple ranked list .\nIntuitively , such a presentation strategy is reasonable for non-ambiguous , homogeneous search\nresults ; in general , it would work well when the search results are good and a user can easily find many relevant documents in the top ranked results .\nHowever , when the search results are diverse ( e.g. , due to ambiguity or multiple aspects of a topic ) as is often the case in Web search , the ranked list presentation would not be effective ; in such a case , it would be better to group the search results into clusters so that a user can easily navigate into a particular interesting group .\nFor example , the results in the first page returned from Google for the ambiguous query `` jaguar '' ( as of Dec. 2nd , 2006 ) contain at least four different senses of `` jaguar '' ( i.e. 
, car , animal , software , and a sports team ) ; even for a more refined query such as `` jaguar team picture '' , the results are still quite ambiguous , including at least four different jaguar teams -- a wrestling team , a jaguar car team , Southwestern College Jaguar softball team , and Jacksonville Jaguar football team .\nMoreover , if a user wants to find a place to download a jaguar software , a query such as `` download jaguar '' is also not very effective as the dominating results are about downloading jaguar brochure , jaguar wallpaper , and jaguar DVD .\nIn these examples , a clustering view of the search results would be much more useful to a user than a simple ranked list .\nClustering is also useful when the search results are poor , in which case , a user would otherwise have to go through a long list sequentially to reach the very first relevant document .\nAs a primary alternative strategy for presenting search results , clustering search results has been studied relatively extensively [ 9 , 15 , 26 , 27 , 28 ] .\nThe general idea in virtually all the existing work is to perform clustering on a set of topranked search results to partition the results into natural clusters , which often correspond to different subtopics of the general query topic .\nA label will be generated to indicate what each cluster is about .\nA user can then view the labels to decide which cluster to look into .\nSuch a strategy has been shown to be more useful than the simple ranked list presentation in several studies [ 8 , 9 , 26 ] .\nHowever , this clustering strategy has two deficiencies which make it not always work well : First , the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user 's perspective .\nFor example , users are often interested in finding either `` phone codes '' or `` zip codes '' when entering the query `` area codes . ''\nBut the clusters discovered by the current methods may partition the results into `` local codes '' and `` international codes . ''\nSuch clusters would not be very useful for users ; even the best cluster would still have a low precision .\nSecond , the cluster labels generated are not informative enough to allow a user to identify the right cluster .\nThere are two reasons for this problem : ( 1 ) The clusters are not corresponding to a user 's interests , so their labels would not be very meaningful or useful .\n( 2 ) Even if a cluster really corresponds to an interesting aspect of the topic , the label may not be informative because it is usually generated based on the contents in a cluster , and it is possible that the user is not very familiar with some of the terms .\nFor example , the ambiguous query `` jaguar '' may mean an animal or a car .\nA cluster may be labeled as `` panthera onca . 
''\nAlthough this is an accurate label for a cluster with the `` animal '' sense of `` jaguar '' , if a user is not familiar with the phrase , the label would not be helpful .\nIn this paper , we propose a different strategy for partitioning search results , which addresses these two deficiencies through imposing a user-oriented partitioning of the search results .\nThat is , we try to figure out what aspects of a search topic are likely interesting to a user and organize the results accordingly .\nSpecifically , we propose to do the following : First , we will learn `` interesting aspects '' of similar topics from search logs and organize search results based on these `` interesting aspects '' .\nFor example , if the current query has occurred many times in the search logs , we can look at what kinds of pages viewed by the users in the results and what kind of words are used together with such a query .\nIn case when the query is ambiguous such as `` jaguar '' we can expect to see some clear clusters corresponding different senses of `` jaguar '' .\nMore importantly , even if a word is not ambiguous ( e.g. , `` car '' ) , we may still discover interesting aspects such as `` car rental '' and `` car pricing '' ( which happened to be the two primary aspects discovered in our search log data ) .\nSuch aspects can be very useful for organizing future search results about `` car '' .\nNote that in the case of `` car '' , clusters generated using regular clustering may not necessarily reflect such interesting aspects about `` car '' from a user 's perspective , even though the generated clusters are coherent and meaningful in other ways .\nSecond , we will generate more meaningful cluster labels using past query words entered by users .\nAssuming that the past search logs can help us learn what specific aspects are interesting to users given the current query topic , we could also expect that those query words entered by users in the past that are associated with the current query can provide meaningful descriptions of the distinct aspects .\nThus they can be better labels than those extracted from the ordinary contents of search results .\nTo implement the ideas presented above , we rely on search engine logs and build a history collection containing the past queries and the associated clickthroughs .\nGiven a new query , we find its related past queries from the history collection and learn aspects through applying the star clustering algorithm [ 2 ] to these past queries and clickthroughs .\nWe can then organize the search results into these aspects using categorization techniques and label each aspect by the most representative past query in the query cluster .\nWe evaluate our method for result organization using logs of a commercial search engine .\nWe compare our method with the default search engine ranking and the traditional clustering of search results .\nThe results show that our method is effective for improving search utility and the labels generated using past query words are more readable than those generated using traditional clustering approaches .\nThe rest of the paper is organized as follows .\nWe first review the related work in Section 2 .\nIn Section 3 , we describe search engine log data and our procedure of building a history collection .\nIn Section 4 , we present our approach in details .\nWe describe the data set in Section 5 and the experimental results are discussed in Section 6 .\nFinally , we conclude our paper and discuss future work in Section 7 .\n2 
.\nRELATED WORK\nOur work is closely related to the study of clustering search results .\nIn [ 9 , 15 ] , the authors used Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system .\nTheir results validate the cluster hypothesis [ 20 ] that relevant documents tend to form clusters .\nThe system `` Grouper '' was described in [ 26 , 27 ] .\nIn these papers , the authors proposed to cluster the results of a real search engine based on the snippets or the contents of returned documents .\nSeveral clustering algorithms are compared and the Suffix Tree Clustering algorithm ( STC ) was shown to be the most effective one .\nThey also showed that using snippets is as effective as using whole documents .\nHowever , an important challenge of document clustering is to generate meaningful labels for clusters .\nTo overcome this difficulty , in [ 28 ] , supervised learning algorithms were studied to extract meaningful phrases from the search result snippets and these phrases were then used to group search results .\nIn [ 13 ] , the authors proposed to use a monothetic clustering algorithm , in which a document is assigned to a cluster based on a single feature , to organize search results , and the single feature is used to label the corresponding cluster .\nClustering search results has also attracted a lot of attention in industry and commercial Web services such as Vivisimo [ 22 ] .\nHowever , in all these works , the clusters are generated solely based on the search results .\nThus the obtained clusters do not necessarily reflect users ' preferences and the generated labels may not be informative from a user 's viewpoint .\nMethods of organizing search results based on text categorization are studied in [ 6 , 8 ] .\nIn this work , a text classifier is trained using a Web directory and search results are then classified into the predefined categories .\nThe authors designed and studied different category interfaces and they found that category interfaces are more effective than list interfaces .\nHowever predefined categories are often too general to reflect the finer granularity aspects of a query .\nSearch logs have been exploited for several different purposes in the past .\nFor example , clustering search queries to find those Frequent Asked Questions ( FAQ ) is studied in [ 24 , 4 ] .\nRecently , search logs have been used for suggesting query substitutes [ 12 ] , personalized search [ 19 ] , Web site design [ 3 ] , Latent Semantic Analysis [ 23 ] , and learning retrieval ranking functions [ 16 , 10 , 1 ] .\nIn our work , we explore past query history in order to better organize the search results for future queries .\nWe use the star clustering algorithm [ 2 ] , which is a graph partition based approach , to learn interesting aspects from search logs given a new query .\nThus past queries are clustered in a query specific manner and this is another difference from previous works such as [ 24 , 4 ] in which all queries in logs are clustered in an offline batch manner .\n3 .\nSEARCH ENGINE LOGS\nSearch engine logs record the activities of Web users , which reflect the actual users ' needs or interests when conducting\nTable 1 : Sample entries of search engine logs .\nDifferent ID 's mean different sessions .\nWeb search .\nThey generally have the following information : text queries that users submitted , the URLs that they clicked after submitting the queries , and the time when they clicked .\nSearch engine logs are separated by sessions 
.\nA session includes a single query and all the URLs that a user clicked after issuing the query [ 24 ] .\nA small sample of search log data is shown in Table 1 .\nOur idea of using search engine logs is to treat these logs as past history , learn users ' interests using this history data automatically , and represent their interests by representative queries .\nFor example , in the search logs , a lot of queries are related to `` car '' and this reflects that a large number of users are interested in information about `` car . ''\nDifferent users are probably interested in different aspects of `` car . ''\nSome are looking for renting a car , thus may submit a query like `` car rental '' ; some are more interested in buying a used car , and may submit a query like `` used car '' ; and others may care more about buying a car accessory , so they may use a query like `` car audio . ''\nBy mining all the queries which are related to the concept of `` car '' , we can learn the aspects that are likely interesting from a user 's perspective .\nAs an example , the following is some aspects about `` car '' learned from our search log data ( see Section 5 ) .\n1 .\ncar rental , hertz car rental , enterprise car rental , ... 2 .\ncar pricing , used car , car values , ... 3 .\ncar accidents , car crash , car wrecks , ... 4 .\ncar audio , car stereo , car speaker , ...\nIn order to learn aspects from search engine logs , we preprocess the raw logs to build a history data collection .\nAs shown above , search engine logs consist of sessions .\nEach session contains the information of the text query and the clicked Web page URLs , together with the time that the user did the clicks .\nHowever , this information is limited since URLs alone are not informative enough to tell the intended meaning of a submitted query accurately .\nTo gather rich information , we enrich each URL with additional text content .\nSpecifically , given the query in a session , we obtain its top-ranked results using the search engine from which we obtained our log data , and extract the snippets of the URLs that are clicked on according to the log information in the corresponding session .\nAll the titles , snippets , and URLs of the clicked Web pages of that query are used to represent the session .\nDifferent sessions may contain the same queries .\nThus the number of sessions could be quite huge and the information in the sessions with the same queries could be redundant .\nIn order to improve the scalability and reduce data sparseness , we aggregate all the sessions which contain exactly the same queries together .\nThat is , for each unique query , we build a `` pseudo-document '' which consists of all the descriptions of its clicks in all the sessions aggregated .\nThe keywords contained in the queries themselves can be regarded as brief summaries of the pseudo-documents .\nAll these pseudo-documents form our history data collection , which is used to learn interesting aspects in the following section .\n4 .\nOUR APPROACH\nOur approach is to organize search results by aspects learned from search engine logs .\nGiven an input query , the general procedure of our approach is :\n1 .\nGet its related information from search engine logs .\nAll the information forms a working set .\n2 .\nLearn aspects from the information in the working set .\nThese aspects correspond to users ' interests given the input query .\nEach aspect is labeled with a representative query .\n3 .\nCategorize and organize the search results of the input 
query according to the aspects learned above .\nWe now give a detailed presentation of each step .\n4.1 Finding Related Past Queries\nGiven a query q , a search engine will return a ranked list of Web pages .\nTo know what the users are really interested in given this query , we first retrieve its past similar queries in our preprocessed history data collection .\nFormally , assume we have N pseudo-documents in our history data set : H = { Q1 , Q2 , ... , QN } .\nEach Qi corresponds to a unique query and is enriched with clickthrough information as discussed in Section 3 .\nTo find q 's related queries in H , a natural way is to use a text retrieval algorithm .\nHere we use the OKAPI method [ 17 ] , one of the state-of-the-art retrieval methods .\nSpecifically , we use the following formula to calculate the similarity between query q and pseudo-document Qi : $s(q, Q_i) = \sum_{w \in q \cap Q_i} \frac{c(w,q) \times IDF(w) \times (k_1 + 1) \times c(w,Q_i)}{c(w,Q_i) + k_1 \left( (1-b) + b \frac{|Q_i|}{avdl} \right)}$ , where k1 and b are OKAPI parameters set empirically , c ( w , Qi ) and c ( w , q ) are the count of word w in Qi and q respectively , IDF ( w ) is the inverse document frequency of word w , and avdl is the average document length in our history collection .\nBased on the similarity scores , we rank all the documents in H .\nThe top ranked documents provide us a working set to learn the aspects that users are usually interested in .\nEach document in H corresponds to a past query , and thus the top ranked documents correspond to q 's related past queries .\n4.2 Learning Aspects by Clustering\nGiven a query q , we use Hq = { d1 , ... , dn } to represent the top ranked pseudo-documents from the history collection H .\nThese pseudo-documents contain the aspects that users are interested in .\nIn this subsection , we propose to use a clustering method to discover these aspects .\nAny clustering algorithm could be applied here .\nIn this paper , we use an algorithm based on graph partition : the star clustering algorithm [ 2 ] .\nA good property of the star clustering in our setting is that it can suggest a good label for each cluster naturally .\nWe describe the star clustering algorithm below .\n4.2.1 Star Clustering\nGiven Hq , star clustering starts with constructing a pairwise similarity graph on this collection based on the vector space model in information retrieval [ 18 ] .\nThen the clusters are formed by dense subgraphs that are star-shaped .\nThese clusters form a cover of the similarity graph .\nFormally , for each of the n pseudo-documents { d1 , ... , dn } in the collection Hq , we compute a TF-IDF vector .\nThen , for each pair of documents di and dj ( i ≠ j ) , their similarity is computed as the cosine score of their corresponding vectors vi and vj , that is $sim(d_i , d_j) = \cos(v_i , v_j) = \frac{v_i \cdot v_j}{|v_i| \, |v_j|}$ .\nA similarity graph G can then be constructed as follows using a similarity threshold parameter v. Each document di is a vertex of G .\nIf sim ( di , dj ) > v , there would be an edge connecting the corresponding two vertices .\nAfter the similarity graph G is built , the star clustering algorithm clusters the documents using a greedy algorithm as follows :\n1 .\nAssociate every vertex in G with a flag , initialized as unmarked .\n2 .\nFrom those unmarked vertices , find the one which has the highest degree and let it be u.
3 .\nMark the flag of u as center .\n4 .\nForm a cluster C containing u and all its neighbors that are not marked as center .\nMark all the selected neighbors as satellites .\n5 .\nRepeat from step 2 until all the vertices in G are marked .\nEach cluster is star-shaped , which consists a single center and several satellites .\nThere is only one parameter v in the star clustering algorithm .\nA big v enforces that the connected documents have high similarities , and thus the clusters tend to be small .\nOn the other hand , a small v will make the clusters big and less coherent .\nWe will study the impact of this parameter in our experiments .\nA good feature of the star clustering algorithm is that it outputs a center for each cluster .\nIn the past query collection Hq , each document corresponds to a query .\nThis center query can be regarded as the most representative one for the whole cluster , and thus provides a label for the cluster naturally .\nAll the clusters obtained are related to the input query q from different perspectives , and they represent the possible aspects of interests about query q of users .\n4.3 Categorizing Search Results\nIn order to organize the search results according to users ' interests , we use the learned aspects from the related past queries to categorize the search results .\nGiven the top m Web pages returned by a search engine for q : { s1 , ... , sm } , we group them into different aspects using a categorization algorithm .\nIn principle , any categorization algorithm can be used here .\nHere we use a simple centroid-based method for categorization .\nNaturally , more sophisticated methods such as SVM [ 21 ] may be expected to achieve even better performance .\nBased on the pseudo-documents in each discovered aspect Ci , we build a centroid prototype pi by taking the average of all the vectors of the documents in Ci :\nAll these pi 's are used to categorize the search results .\nSpecifically , for any search result sj , we build a TF-IDF vector .\nThe centroid-based method computes the cosine similarity between the vector representation of sj and each centroid prototype pi .\nWe then assign sj to the aspect with which it has the highest cosine similarity score .\nAll the aspects are finally ranked according to the number of search results they have .\nWithin each aspect , the search results are ranked according to their original search engine ranking .\n5 .\nDATA COLLECTION\nWe construct our data set based on the MSN search log data set released by the Microsoft Live Labs in 2006 [ 14 ] .\nIn total , this log data spans 31 days from 05/01/2006 to 05/31/2006 .\nThere are 8,144,000 queries , 3,441,000 distinct queries , and 4,649,000 distinct URLs in the raw data .\nTo test our algorithm , we separate the whole data set into two parts according to the time : the first 2/3 data is used to simulate the historical data that a search engine accumulated , and we use the last 1/3 to simulate future queries .\nIn the history collection , we clean the data by only keeping those frequent , well-formatted , English queries ( queries which only contain characters ` a ' , ` b ' , ... 
, ` z ' , and space , and appear more than 5 times ) .\nAfter cleaning , we get 169,057 unique queries in our history data collection totally .\nOn average , each query has 3.5 distinct clicks .\nWe build the `` pseudo-documents '' for all these queries as described in Section 3 .\nThe average length of these pseudo-documents is 68 words and the total data size of our history collection is 129MB .\nWe construct our test data from the last 1/3 data .\nAccording to the time , we separate this data into two test sets equally for cross-validation to set parameters .\nFor each test set , we use every session as a test case .\nEach session contains a single query and several clicks .\n( Note that we do not aggregate sessions for test cases .\nDifferent test cases may have the same queries but possibly different clicks . )\nSince it is infeasible to ask the original user who submitted a query to judge the results for the query , we follow the work [ 11 ] and opt to use the clicks associated with the query in a session to approximate relevant documents .\nUsing clicks as judgments , we can then compare different algorithms for organizing search results to see how well these algorithms can help users reach the clicked URLs .\nOrganizing search results into different aspects is expected to help informational queries .\nIt thus makes sense to focus on the informational queries in our evaluation .\nFor each test case , i.e. , each session , we count the number of different clicks and filter out those test cases with fewer than 4 clicks under the assumption that a query with more clicks is more likely to be an informational query .\nSince we want to test whether our algorithm can learn from the past queries , we also filter out those test cases whose queries can not retrieve at least 100 pseudo-documents from our history collection .\nFinally , we obtain 172 and 177 test cases in the first and\nsecond test sets respectively .\nOn average , we have 6.23 and 5.89 clicks for each test case in the two test sets respectively .\n6 .\nEXPERIMENTS\nIn the section , we describe our experiments on the search result organization based past search engine logs .\n6.1 Experimental Design\nWe use two baseline methods to evaluate the proposed method for organizing search results .\nFor each test case , the first method is the default ranked list from a search engine ( baseline ) .\nThe second method is to organize the search results by clustering them ( cluster-based ) .\nFor fair comparison , we use the same clustering algorithm as our logbased method ( i.e. , star clustering ) .\nThat is , we treat each search result as a document , construct the similarity graph , and find the star-shaped clusters .\nWe compare our method ( log-based ) with the two baseline methods in the following experiments .\nFor both cluster-based and log-based methods , the search results within each cluster is ranked based on their original ranking given by the search engine .\nTo compare different result organization methods , we adopt a similar method as in the paper [ 9 ] .\nThat is , we compare the quality ( e.g. 
, precision ) of the best cluster , which is defined as the one with the largest number of relevant documents .\nThe goal of organizing search results into clusters is to help users navigate to relevant documents quickly .\nThe above metric simulates a scenario in which users always choose the right cluster and look into it .\nSpecifically , we download and organize the top 100 search results into aspects for each test case .\nWe use Precision at 5 documents ( P@5 ) in the best cluster as the primary measure to compare different methods .\nP@5 is a very meaningful measure as it tells us the perceived precision when the user opens a cluster and looks at the first 5 documents .\nWe also use Mean Reciprocal Rank ( MRR ) as another metric .\nMRR is calculated as $MRR = \frac{1}{|T|} \sum_{q \in T} \frac{1}{r_q}$ , where T is a set of test queries and r_q is the rank of the first relevant document for q .\nTo give a fair comparison across different organization algorithms , we force both cluster-based and log-based methods to output the same number of aspects and force each search result to be in one and only one aspect .\nThe number of aspects is fixed at 10 in all the following experiments .\nThe star clustering algorithm can output different numbers of clusters for different inputs .\nTo constrain the number of clusters to 10 , we order all the clusters by their sizes and select the top 10 as aspect candidates .\nWe then re-assign each search result to the one of these 10 selected aspects whose centroid has the highest similarity score with the result .\nIn our experiments , we observe that the sizes of the best clusters are all larger than 5 , and this ensures that P@5 is a meaningful metric .\n6.2 Experimental Results\nOur main hypothesis is that organizing search results based on the users ' interests learned from a search log data set is more beneficial than organizing results as a simple ranked list or clustering the search results .\nIn the following , we test our hypothesis from two perspectives -- organization and labeling .\nTable 2 : Comparison of different methods by MRR and P@5 .\nWe also show the percentage of relative improvement in the lower part .\nTable 3 : Pairwise comparison w.r.t the number of test cases whose P@5's are improved versus decreased w.r.t the baseline .\n6.2.1 Overall performance\nWe compare three methods , basic search engine ranking ( baseline ) , the traditional clustering based method ( cluster-based ) , and our log-based method ( log-based ) , in Table 2 using MRR and P@5 .\nWe optimize the similarity threshold parameter v for each collection individually based on P@5 values .\nThis shows the best performance that each method can achieve .\nIn this table , we can see that in both test collections , our method is better than both the `` baseline '' and the `` cluster-based '' methods .\nFor example , in the first test collection , the MRR of the baseline method is 0.734 , that of the cluster-based method is 0.773 , and that of our method is 0.783 .\nWe achieve higher accuracy than both the cluster-based method ( 1.27 % improvement ) and the baseline method ( 6.62 % improvement ) .\nThe P@5 values are 0.332 for the baseline , 0.316 for the cluster-based method , but 0.353 for our method .\nOur method improves over the baseline by 6.31 % , while the cluster-based method even decreases the accuracy .\nThis is because the cluster-based method organizes the search results based only on their contents .\nThus it could organize the results differently from users ' preferences .\nThis confirms our hypothesis about the bias of the cluster-based method .\nComparing our method with the
cluster-based method , we achieve significant improvement on both test collections .\nThe p-values of the significance tests based on P@5 on both collections are 0.01 and 0.02 respectively .\nThis shows that our log-based method is effective in learning users ' preferences from the past query history , and thus it can organize the search results in a way that is more useful to users .\nWe showed the optimal results above .\nTo test the sensitivity of the parameter v of our log-based method , we use one of the test sets to tune the parameter to be optimal and then use the tuned parameter on the other set .\nWe compare this result ( log tuned outside ) with the optimal results of both cluster-based ( cluster optimized ) and log-based methods ( log optimized ) in Figure 1 .\nWe can see that , as expected , the performance using the parameter tuned on a separate set is worse than the optimal performance .\nHowever , our method still performs much better than the optimal results of the cluster-based method on both test collections .\nFigure 1 : Results using parameters tuned from the other test collection .\nWe compare it with the optimal performance of the cluster-based and our log-based methods .\nFigure 2 : The correlation between performance change and result diversity .\nIn Table 3 , we show pairwise comparisons of the three methods in terms of the numbers of test cases for which P@5 is increased versus decreased .\nWe can see that our method improves more test cases compared with the other two methods .\nIn the next section , we show a more detailed analysis to see what types of test cases can be improved by our method .\n6.2.2 Detailed Analysis\nTo better understand the cases where our log-based method can improve the accuracy , we test two properties : result diversity and query difficulty .\nAll the analysis below is based on test set 1 .\nDiversity Analysis : Intuitively , organizing search results into different aspects is more beneficial to those queries whose results are more diverse , as for such queries , the results tend to form two or more big clusters .\nIn order to test the hypothesis that the log-based method helps queries with diverse results more , we compute the size ratios of the biggest and second biggest clusters in our log-based results and use this ratio as an indicator of diversity .\nIf the ratio is small , it means that the first two clusters differ little in size and thus the results are more diverse .\nIn this case , we would expect our method to help more .\nThe results are shown in Figure 2 .\nIn this figure , we partition the ratios into 4 bins .\nThe 4 bins correspond to the ratio ranges [ 1 , 2 ) , [ 2 , 3 ) , [ 3 , 4 ) , and [ 4 , +∞ ) respectively .\n( [ i , j ) means that i ≤ ratio < j.
) In each bin , we count the numbers of test cases whose P@5's are improved versus decreased with respect to the ranking baseline , and plot the numbers in this figure .\nWe can observe that when the ratio is smaller , the log-based method can improve more test cases .\nBut when the ratio is large , the log-based method can not improve over the baseline .\nFigure 3 : The correlation between performance change and query difficulty .\nFor example , in bin 1 , 48 test cases are improved and 34 are decreased .\nBut in bin 4 , all the 4 test cases are decreased .\nThis confirms our hypothesis that our method can help more if the query has more diverse results .\nThis also suggests that we should `` turn off '' the option of re-organizing search results if the results are not very diverse ( e.g. , as indicated by the cluster size ratio ) .\nDifficulty Analysis : Difficult queries have been studied in recent years [ 7 , 25 , 5 ] .\nHere we analyze the effectiveness of our method in helping difficult queries .\nWe quantify the query difficulty by the Mean Average Precision ( MAP ) of the original search engine ranking for each test case .\nWe then order the 172 test cases in test set 1 in an increasing order of MAP values .\nWe partition the test cases into 4 bins with each having a roughly equal number of test cases .\nA small MAP means that the utility of the original ranking is low .\nBin 1 contains those test cases with the lowest MAP 's and bin 4 contains those test cases with the highest MAP 's .\nFor each bin , we compute the numbers of test cases whose P@5's are improved versus decreased .\nFigure 3 shows the results .\nClearly , in bin 1 , most of the test cases are improved ( 24 vs 3 ) , while in bin 4 , log-based method may decrease the performance ( 3 vs 20 ) .\nThis shows that our method is more beneficial to difficult queries , which is as expected since clustering search results is intended to help difficult queries .\nThis also shows that our method does not really help easy queries , thus we should turn off our organization option for easy queries .\n6.2.3 Parameter Setting\nWe examine parameter sensitivity in this section .\nFor the star clustering algorithm , we study the similarity threshold parameter v.
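As a concrete reference for what this threshold controls, below is a minimal Python sketch of the greedy star clustering procedure described in Section 4.2.1, with the similarity threshold exposed as a parameter. The function names, the dictionary-based sparse TF-IDF representation, and the toy vectors in the usage comment are illustrative assumptions rather than part of the original implementation.

from itertools import combinations

def cosine(u, w):
    # Cosine similarity between two sparse vectors given as {term: weight} dicts.
    dot = sum(u[t] * w[t] for t in u.keys() & w.keys())
    norm_u = sum(x * x for x in u.values()) ** 0.5
    norm_w = sum(x * x for x in w.values()) ** 0.5
    return dot / (norm_u * norm_w) if norm_u and norm_w else 0.0

def star_cluster(vectors, threshold):
    # Greedy star clustering over a list of TF-IDF vectors (Section 4.2.1).
    # Returns (center_index, member_indices) pairs; the center's original query
    # can serve as the cluster label, as suggested in the text.
    n = len(vectors)
    neighbors = {i: set() for i in range(n)}
    # Build the similarity graph: an edge whenever similarity exceeds the threshold.
    for i, j in combinations(range(n), 2):
        if cosine(vectors[i], vectors[j]) > threshold:
            neighbors[i].add(j)
            neighbors[j].add(i)
    unmarked, centers, clusters = set(range(n)), set(), []
    while unmarked:
        # Step 2: among unmarked vertices, pick the one with the highest degree.
        u = max(unmarked, key=lambda i: len(neighbors[i]))
        centers.add(u)  # Step 3: mark it as a center.
        # Step 4: the cluster contains the center and all neighbors not marked as centers.
        satellites = [j for j in neighbors[u] if j not in centers]
        clusters.append((u, [u] + satellites))
        unmarked -= {u, *satellites}  # Step 5: repeat until every vertex is marked.
    return clusters

# Hypothetical usage with toy TF-IDF vectors for four related past queries:
# star_cluster([{"car": 1.0, "rental": 0.8}, {"car": 0.9, "rental": 0.7},
#               {"car": 0.9, "pricing": 0.8}, {"used": 0.7, "car": 0.9}], threshold=0.2)

In this sketch, raising the threshold removes edges from the similarity graph, which tends to yield smaller and more coherent clusters, matching the behavior of the parameter discussed above.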
For the OKAPI retrieval function , we study the parameters k1 and b .\nWe also study the impact of the number of past queries retrieved in our log-based method .\nFigure 4 shows the impact of the parameter v for both cluster-based and log-based methods on both test sets .\nWe vary v from 0.05 to 0.3 with step 0.05 .\nFigure 4 shows that the performance is not very sensitive to the parameter v .\nWe can always obtain the best result in range 0.1 < v < 0.25 .\nIn Table 4 , we show the impact of OKAPI parameters .\nWe vary k1 from 1.0 to 2.0 with step 0.2 and b from 0 to 1 with step 0.2 .\nFrom this table , it is clear that P@5 is also not very sensitive to the parameter setting .\nMost of the values are larger than 0.35 .\nThe default values k1 = 1.2 and b = 0.8 give approximately optimal results .\nWe further study the impact of the amount of history\nFigure 4 : The impact of similarity threshold v on both cluster-based and log-based methods .\nWe show the result on both test collections .\nTable 4 : Impact of OKAPI parameters k1 and b.\ninformation to learn from by varying the number of past queries to be retrieved for learning aspects .\nThe results on both test collections are shown in Figure 5 .\nWe can see that the performance gradually increases as we enlarge the number of past queries retrieved .\nThus our method could potentially learn more as we accumulate more history .\nMore importantly , as time goes , more and more queries will have sufficient history , so we can improve more and more queries .\n6.2.4 An Illustrative Example\nWe use the query `` area codes '' to show the difference in the results of the log-based method and the cluster-based method .\nThis query may mean `` phone codes '' or `` zip codes '' .\nTable 5 shows the representative keywords extracted from the three biggest clusters of both methods .\nIn the clusterbased method , the results are partitioned based on locations : local or international .\nIn the log-based method , the results are disambiguated into two senses : `` phone codes '' or `` zip codes '' .\nWhile both are reasonable partitions , our evaluation indicates that most users using such a query are often interested in either `` phone codes '' or `` zip codes . 
''\nsince the P@5 values of cluster-based and log-based methods are 0.2 and 0.6 , respectively .\nTherefore our log-based method is more effective in helping users to navigate into their desired results .\nCluster-based method Log-based method city , state telephone , city , international local , area phone , dialing international zip , postal\nTable 5 : An example showing the difference between the cluster-based method and our log-based method\nFigure 5 : The impact of the number of past queries retrieved .\n6.2.5 Labeling Comparison\nWe now compare the labels between the cluster-based method and log-based method .\nThe cluster-based method has to rely on the keywords extracted from the snippets to construct the label for each cluster .\nOur log-based method can avoid this difficulty by taking advantage of queries .\nSpecifically , for the cluster-based method , we count the frequency of a keyword appearing in a cluster and use the most frequent keywords as the cluster label .\nFor log-based method , we use the center of each star cluster as the label for the corresponding cluster .\nIn general , it is not easy to quantify the readability of a cluster label automatically .\nWe use examples to show the difference between the cluster-based and the log-based methods .\nIn Table 6 , we list the labels of the top 5 clusters for two examples `` jaguar '' and `` apple '' .\nFor the cluster-based method , we separate keywords by commas since they do not form a phrase .\nFrom this table , we can see that our log-based method gives more readable labels because it generates labels based on users ' queries .\nThis is another advantage of our way of organizing search results over the clustering approach .\nTable 6 : Cluster label comparison .\n7 .\nCONCLUSIONS AND FUTURE WORK\nIn this paper , we studied the problem of organizing search results in a user-oriented manner .\nTo attain this goal , we rely on search engine logs to learn interesting aspects from users ' perspective .\nGiven a query , we retrieve its related\nqueries from past query history , learn the aspects by clustering the past queries and the associated clickthrough information , and categorize the search results into the aspects learned .\nWe compared our log-based method with the traditional cluster-based method and the baseline of search engine ranking .\nThe experiments show that our log-based method can consistently outperform cluster-based method and improve over the ranking baseline , especially when the queries are difficult or the search results are diverse .\nFurthermore , our log-based method can generate more meaningful aspect labels than the cluster labels generated based on search results when we cluster search results .\nThere are several interesting directions for further extending our work : First , although our experiment results have clearly shown promise of the idea of learning from search logs to organize search results , the methods we have experimented with are relatively simple .\nIt would be interesting to explore other potentially more effective methods .\nIn particular , we hope to develop probabilistic models for learning aspects and organizing results simultaneously .\nSecond , with the proposed way of organizing search results , we can expect to obtain informative feedback information from a user ( e.g. 
, the aspect chosen by a user to view ) .\nIt would thus be interesting to study how to further improve the organization of the results based on such feedback information .\nFinally , we can combine a general search log with any personal search log to customize and optimize the organization of search results for each individual user ."} {"id": "J-9", "title": "", "abstract": "", "keyphrases": ["econom theori", "empir and laboratori evid", "equilibrium price", "financi secur", "secur's valu", "comput process", "path toward equilibrium", "trader", "market price", "simplifi model", "trade strategi", "comput properti of the process", "secur", "payoff", "threshold function", "probabl distribut", "round", "bit number", "distribut inform", "lower bound", "worst case", "inform market", "distribut inform market", "market comput", "inform aggreg", "converg to equilibrium", "ration expect", "effici market hypothesi"], "prmu": [], "lvl-1": "Computation in a Distributed Information Market\u2217 Joan Feigenbaum \u2020 Yale University Department of Computer Science New Haven, CT 06520 feigenbaum@cs.yale.edu Lance Fortnow NEC Laboratories America 4 Independence Way Princeton, NJ 08540 fortnow@nec-labs.com David M. Pennock \u2021 Overture Services, Inc. 74 N. Pasadena Ave, 3rd floor Pasadena, CA 91103 david.pennock@overture.com Rahul Sami \u00a7 Yale University Department of Computer Science New Haven, CT 06520 sami@cs.yale.edu ABSTRACT According to economic theory-supported by empirical and laboratory evidence-the equilibrium price of a financial security reflects all of the information regarding the security``s value.\nWe investigate the computational process on the path toward equilibrium, where information distributed among traders is revealed step-by-step over time and incorporated into the market price.\nWe develop a simplified model of an information market, along with trading strategies, in order to formalize the computational properties of the process.\nWe show that securities whose payoffs cannot be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory.\nOn the other hand, securities whose payoffs are threshold functions are guaranteed to converge, for all prior probability distributions.\nMoreover, these threshold securities converge in at most n rounds, where n is the number of bits of distributed information.\nWe also prove a lower bound, showing a type of threshold security that requires at least n/2 rounds to converge in the worst case.\nCategories and Subject Descriptors F.m [Theory of Computation]: Miscellaneous; J.4 [Computer Applications]: Social and Behavioral SciencesEconomics; C.2.4 [Computer Systems Organization]: Computer-Communication Networks-Distributed Systems General Terms Economics, Theory 1.\nINTRODUCTION The strong form of the efficient markets hypothesis states that market prices nearly instantly incorporate all information available to all traders.\nAs a result, market prices encode the best forecasts of future outcomes given all information, even if that information is distributed across many sources.\nSupporting evidence can be found in empirical studies of options markets [14], political stock markets [7, 8, 22], sports betting markets [3, 9, 27], horse-racing markets [30], market games [23, 24], and laboratory investigations of experimental markets [6, 25, 26].\nThe process of information incorporation is, at its essence, a distributed computation.\nEach trader begins 
with his or her own information.\nAs trades are made, summary information is revealed through market prices.\nTraders learn or infer what information others are likely to have by observing prices, then update their own beliefs based on their observations.\nOver time, if the process works as advertised, all information is revealed, and all traders converge to the same information state.\nAt this point, the market is in what is called a rational expectations equilibrium [11, 16, 19].\nAll information available to all traders is now reflected in the going prices, and no further trades are desirable until some new information becomes available.\nWhile most markets are not designed with information aggregation as a primary motivation-for example, derivatives 156 markets are intended mainly for risk management and sports betting markets for entertainment-recently, some markets have been created solely for the purpose of aggregating information on a topic of interest.\nThe Iowa Electronic Market1 is a prime example, operated by the University of Iowa Tippie College of Business for the purpose of investigating how information about political elections distributed among traders gets reflected in securities prices whose payoffs are tied to actual election outcomes [7, 8].\nIn this paper, we investigate the nature of the computational process whereby distributed information is revealed and combined over time into the prices in information markets.\nTo do so, in Section 3, we propose a model of an information market that is tractable for theoretical analysis and, we believe, captures much of the important essence of real information markets.\nIn Section 4, we present our main theoretical results concerning this model.\nWe prove that only Boolean securities whose payoffs can be expressed as threshold functions of the distributed input bits of information are guaranteed to converge as predicted by rational expectations theory.\nBoolean securities with more complex payoffs may not converge under some prior distributions.\nWe also provide upper and lower bounds on the convergence time for these threshold securities.\nWe show that, for all prior distributions, the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds, where n is the number of bits of distributed information.\nWe show that this worst-case bound is tight within a factor of two by illustrating a situation in which a threshold security requires n/2 rounds to converge.\n2.\nRELATIONSHIP TO RELATED WORK As mentioned, there is a great deal of documented evidence supporting the notion that markets are able to aggregate information in a number of scenarios using a variety of market mechanisms.\nThe theoretically ideal mechanism requires what is called a complete market.\nA complete market contains enough linearly independent securities to span the entire state space of interest [1, 31].\nThat is, the dimensionality of the available securities equals the dimensionality of the event space over which information is to be aggregated.2 In this ideal case, all private information becomes common knowledge in equilibrium, and thus any function of the private information can be directly evaluated by any agent or observer.\nHowever, this theoretical ideal is almost never achievable in practice, because it generally requires a number of securities exponential in the number of random variables of interest.\nWhen available securities form an incomplete market [17] in relation to the desired information 
space-as is usually the case-aggregation may be partial.\nNot all private information is revealed in equilibrium, and prices may not convey enough information to recover the complete joint probability distribution over all events.\nStill, it is generally assumed that aggregation does occur along the dimensions represented in the market; that is, prices do reflect a consistent projection of the entire joint distribution onto the smaller-dimensional space spanned by securities.\nIn this pa1 http://www.biz.uiowa.edu/iem/ 2 When we refer to independence or dimensionality of securities, we mean the independence or dimensionality of the random variables on which the security payoffs are based.\nper, we investigate cases in which even this partial aggregation fails.\nFor example, even though there is enough private information to determine completely the price of a security in the market, the equilibrium price may in fact reveal no information at all!\nSo characterizations of when a rational expectations equilibrium is fully revealing do not immediately apply to our problem.\nWe are not asking whether all possible functions of private information can be evaluated, but whether a particular target function can be evaluated.\nWe show that properties of the function itself play a major role, not just the relative dimensionalities of the information and security spaces.\nOur second main contribution is examining the dynamics of information aggregation before equilibrium, in particular proving upper and lower bounds on the time to convergence in those cases in which aggregation succeeds.\nShoham and Tennenholtz [29] define a rationally computable function as a function of agents'' valuations (types) that can be computed by a market, assuming agents follow rational equilibrium strategies.\nThe authors mainly consider auctions of goods as their basic mechanistic unit and examine the communication complexity involved in computing various functions of agents'' valuations of goods.\nFor example, they give auction mechanisms that can compute the maximum, minimum, and kth-highest of the agents'' valuations of a single good using 1, 1, and n \u2212 k + 1 bits of communication, respectively.\nThey also examine the potential tradeoff between communication complexity and revenue.\n3.\nMODEL OF AN INFORMATION MARKET To investigate the properties and limitations of the process whereby an information market converges toward its rational-expectations equilibrium, we formulate a representative model of the market.\nIn designing the model, our goals were two-fold: (1) to make the model rich enough to be realistic and (2) to make the model simple enough to admit meaningful analysis.\nAny modeling decisions must trade off these two generally conflicting goals, and the decision process is as much an art as a science.\nNonetheless, we believe that our model captures enough of the essence of real information markets to lend credence to the results that follow.\nIn this section, we present our modeling assumptions and justifications in detail.\nSection 3.1 describes the initial information state of the system, Section 3.2 covers the market mechanism, and Section 3.3 presents the agents'' strategies.\n3.1 Initial information state There are n agents (traders) in the system, each of whom is privy to one bit of information, denoted xi.\nThe vector of all n bits is denoted x = (x1, x2, ... 
, xn).\nIn the initial state, each agent is aware only of her own bit of information.\nAll agents have a common prior regarding the joint distribution of bits among agents, but none has any specific information about the actual value of bits held by others.\nNote that this common-prior assumption-typical in the economics literature-does not imply that all agents agree.\nTo the contrary, because each agent has different information, the initial state of the system is in general a state of disagreement.\nNearly any disagreement that could be modeled by assuming different priors can instead be mod157 eled by assuming a common prior with different information, and so the common-prior assumption is not as severe as it may seem.\n3.2 Market mechanism The security being traded by the agents is a financial instrument whose payoff is a function f(x) of the agents'' bits.\nThe form of f (the description of the security) is common knowledge3 among agents.\nWe sometimes refer to the xi as the input bits.\nAt some time in the future after trading is completed, the true value of f(x) is revealed,4 and every owner of the security is paid an amount f(x) in cash per unit owned.\nIf an agent ends with a negative quantity of the security (by selling short), then the agent must pay the amount f(x) in cash per unit.\nNote that if someone were to have complete knowledge of all input bits x, then that person would know the true value f(x) of the security with certainty, and so would be willing to buy it at any price lower than f(x) and (short) sell it at any price higher than f(x).5 Following Dubey, Geanakoplos, and Shubik [4], and Jackson and Peck [13], we model the market-price formation process as a multiperiod Shapley-Shubik market game [28].\nThe Shapley-Shubik process operates as follows: The market proceeds in synchronous rounds.\nIn each round, each agent i submits a bid bi and a quantity qi.\nThe semantics are that agent i is supplying a quantity qi of the security and an amount bi of money to be traded in the market.\nFor simplicity, we assume that there are no restrictions on credit or short sales, and so an agent``s trade is not constrained by her possessions.\nThe market clears in each round by settling at a single price that balances the trade in that round: The clearing price is p = i bi/ i qi.\nAt the end of the round, agent i holds a quantity qi proportional to the money she bid: qi = bi/p.\nIn addition, she is left with an amount of money bi that reflects her net trade at price p: bi = bi \u2212 p(qi \u2212 qi) = pqi.\nNote that agent i``s net trade in the security is a purchase if p < bi/qi and a sale if p > bi/qi.\nAfter each round, the clearing price p is publicly revealed.\nAgents then revise their beliefs according to any information garnered from the new price.\nThe next round proceeds as the previous.\nThe process continues until an equilibrium is reached, meaning that prices and bids do not change from one round to the next.\nIn this paper, we make a further simplifying restriction on the trading in each round: We assume that qi = 1 for each agent i.\nThis modeling assumption serves two analytical purposes.\nFirst, it ensures that there is forced trade in every round.\nClassic results in economics show that perfectly rational and risk-neutral agents will never trade with each other for purely speculative reasons (even if they have differing information) [20].\nThere are many factors that can induce rational agents to trade, such as differing degrees of risk aversion, the presence of 
other traders who are trading for liquidity reasons rather than speculative gain, or a market maker who is pumping money into the market through a subsidy.\nWe sidestep this issue by simply assuming that the 3 Common knowledge is information that all agents know, that all agents know that all agents know, and so on ad infinitum [5].\n4 The values of the input bits themselves may or may not be publicly revealed.\n5 Throughout this paper we ignore the time value of money.\ninformed agents will trade (for unspecified reasons).\nSecond, forcing qi = 1 for all i means that the total volume of trade and the impact of any one trader on the clearing price are common knowledge; the clearing price p is a simple function of the agents'' bids, p = i bi/n.\nWe will discuss the implications of alternative market models in Section 5.\n3.3 Agent strategies In order to draw formal conclusions about the price evolution process, we need to make some assumptions about how agents behave.\nEssentially we assume that agents are riskneutral, myopic,6 and bid truthfully: Each agent in each round bids his or her current valuation of the security, which is that agent``s estimation of the expected payoff of the security.\nExpectations are computed according to each agent``s probability distribution, which is updated via Bayes'' rule when new information (revealed via the clearing prices) becomes available.\nWe also assume that it is common knowledge that all the agents behave in the specified manner.\nWould rational agents actually behave according to this strategy?\nIt``s hard to say.\nCertainly, we do not claim that this is an equilibrium strategy in the game-theoretic sense.\nFurthermore, it is clear that we are ignoring some legitimate tactics, e.g., bidding falsely in one round in order to effect other agents'' judgments in the following rounds (nonmyopic reasoning).\nHowever, we believe that the strategy outlined is a reasonable starting point for analysis.\nSolving for a true game-theoretic equilibrium strategy in this setting seems extremely difficult.\nOur assumptions seem reasonable when there are enough agents in the system such that extremely complex meta-reasoning is not likely to improve upon simply bidding one``s true expected value.\nIn this case, according the the Shapley-Shubik mechanism, if the clearing price is below an agent``s expected value that agent will end up buying (increasing expected profit); otherwise, if the clearing price is above the agent``s expected value, the agent will end up selling (also increasing expected profit).\n4.\nCOMPUTATIONAL PROPERTIES In this section, we study the computational power of information markets for a very simple class of aggregation functions: Boolean functions of n variables.\nWe characterize the set of Boolean functions that can be computed in our market model for all prior distributions and then prove upper and lower bounds on the worst-case convergence time for these markets.\nThe information structure we assume is as follows: There are n agents, and each agent i has a single bit of private information xi.\nWe use x to denote the vector (x1, ... 
, xn) of inputs.\nAll the agents also have a common prior probability distribution P : {0, 1}n \u2192 [0, 1] over the values of x.\nWe define a Boolean aggregate function f(x) : {0, 1}n \u2192 {0, 1} that we would like the market to compute.\nNote that x, and hence f(x), is completely determined by the combination of all the agents'' information, but it is not known to any one agent.\nThe agents trade in a Boolean security F, which pays off $1 if f(x) = 1 and $0 if f(x) = 0.\nSo an omniscient 6 Risk neutrality implies that each agent``s utility for the security is linearly related to his or her subjective estimation of the expected payoff of the security.\nMyopic behavior means that agents treat each round as if it were the final round: They do not reason about how their bids may affect the bids of other agents in future rounds.\n158 agent with access to all the agents'' bits would know the true value of security F-either exactly $1 or exactly $0.\nIn reality, risk-neutral agents with limited information will value F according to their expectation of its payoff, or Ei[f(x)], where Ei is the expectation operator applied according to agent i``s probability distribution.\nFor any function f, trading in F may happen to converge to the true value of f(x) by coincidence if the prior probability distribution is sufficiently degenerate.\nMore interestingly, we would like to know for which functions f does the price of the security F always converge to f(x) for all prior probability distributions P.7 In Section 4.2, we prove a necessary and sufficient condition that guarantees convergence.\nIn Section 4.3, we address the natural follow-up question, by deriving upper and lower bounds on the worst-case number of rounds of trading required for the value of f(x) to be revealed.\n4.1 Equilibrium price characterization Our analysis builds on a characterization of the equilibrium price of F that follows from a powerful result on common knowledge of aggregates due to McKelvey and Page [19], later extended by Nielsen et al. [21].\nInformation markets aim to aggregate the knowledge of all the agents.\nProcedurally, this occurs because the agents learn from the markets: The price of the security conveys information to each agent about the knowledge of other agents.\nWe can model the flow of information through prices as follows.\nLet \u2126 = {0, 1}n be the set of possible values of x; we say that \u2126 denotes the set of possible states of the world.\nThe prior P defines everyone``s initial belief about the likelihood of each state.\nAs trading proceeds, some possible states can be logically ruled out, but the relative likelihoods among the remaining states are fully determined by the prior P.\nSo the common knowledge after any stage is completely described by the set of states that an external observer-with no information beyond the sequence of prices observed-considers possible (along with the prior).\nSimilarly, the knowledge of agent i at any point is also completely described by the set of states she considers possible.\nWe use the notation Sr to denote the common-knowledge possibility set after round r, and Sr i to denote the set of states that agent i considers possible after round r. 
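To make the price-formation and knowledge-update process just described concrete, the following short Python sketch simulates the simplified market of Section 3 under a common prior: every agent truthfully bids her conditional expectation of the payoff, the clearing price is the mean of the bids (all quantities q_i = 1), and after each announced price every state that would have produced a different price is ruled out. This is only an illustrative sketch of the model as described, not code from any implementation; the function names (simulate_market, uniform_prior), the exact-fraction arithmetic, the stopping rule, and the OR/XOR test securities are assumptions made for the example.

from fractions import Fraction
from itertools import product

def simulate_market(f, prior, x, max_rounds=None):
    """Simulate the simplified Shapley-Shubik information market sketched above.

    f     : payoff function mapping a bit tuple in {0,1}^n to 0 or 1
    prior : dict mapping each bit tuple to its (exact) prior probability
    x     : the realized bit vector; agent i privately knows x[i]
    Returns the list of announced clearing prices, one per round.
    """
    n = len(x)
    S = [y for y, q in prior.items() if q > 0]      # common-knowledge possibility set S^r
    prices = []
    for _ in range(max_rounds or n + 1):            # n rounds suffice for threshold payoffs (cf. Theorem 4 below)
        def bid(i, b, S=S):                         # truthful myopic bid: E[f(y) | y in S, y_i = b]
            consistent = [y for y in S if y[i] == b]
            mass = sum(prior[y] for y in consistent)
            return sum(prior[y] * f(y) for y in consistent) / mass
        def price_if_state(y, S=S):                 # price an outside observer would predict for state y
            return sum(bid(i, y[i], S) for i in range(n)) / n
        p = price_if_state(x, S)                    # the clearing price actually announced this round
        prices.append(p)
        S_new = [y for y in S if price_if_state(y, S) == p]   # rule out states inconsistent with the price
        if len(S_new) == len(S):                    # nothing ruled out: prices have reached equilibrium
            break
        S = S_new
    return prices

def uniform_prior(n):
    return {y: Fraction(1, 2 ** n) for y in product((0, 1), repeat=n)}

# Illustrative securities: OR is the weighted threshold function x1 + x2 >= 1; XOR is not a threshold function.
or_fn = lambda y: int(y[0] + y[1] >= 1)
xor_fn = lambda y: y[0] ^ y[1]

print(simulate_market(or_fn, uniform_prior(2), (1, 0)))    # [Fraction(3, 4), Fraction(1, 1)]: converges to f(x) = 1
print(simulate_market(xor_fn, uniform_prior(2), (1, 0)))   # [Fraction(1, 2)]: stalls at 1/2 and reveals nothing

Running this sketch, the threshold security reaches the full-information price f(x) = 1 within two rounds, whereas the XOR security settles immediately at 1/2 under the uniform prior and reveals nothing about x; this is exactly the behavior formalized in Example 1 and Theorems 2 and 3 below.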
Initially, the only common knowledge is that the input vector x is in \u2126; in other words, the set of states considered possible by an external observer before trading has occurred is the set S0 = \u2126.\nHowever, each agent i also knows the value of her bit xi; thus, her knowledge set S0 i is the set {y \u2208 \u2126|yi = xi}.\nAgent i``s first-round bid is her conditional expectation of the event f(x) = 1 given that x \u2208 S0 i .\nAll the agents'' bids are processed, and the clearing price p1 is announced.\nAn external observer could predict agent i``s bid if he knew the value of xi.\nThus, if he knew the value of x, he could predict the value of p1 .\nIn other words, the external observer knows the function price1 (x) that relates the first round price to the true state x. Of course, he does not know the value of x; however, he can rule out any vector x that would have resulted in a different clearing price from the observed price p1 .\n7 We assume that the common prior is consistent with x in the sense that it assigns a non-zero probability to the actual value of x. Thus, the common knowledge after round 1 is the set S1 = {y \u2208 S0 | price1 (y) = p1 }.\nAgent i knows the common knowledge and, in addition, knows the value of bit xi.\nHence, after every round r, the knowledge of agent i is given by Sr i = {y \u2208 Sr |yi = xi}.\nNote that, because knowledge can only improve over time, we must always have Sr i \u2286 Sr\u22121 i and Sr \u2286 Sr\u22121 .\nThus, only a finite number of changes in each agent``s knowledge are possible, and so eventually we must converge to an equilibrium after which no player learns any further information.\nWe use S\u221e to denote the common knowledge at this point, and S\u221e i to denote agent i``s knowledge at this point.\nLet p\u221e denote the clearing price at equilibrium.\nInformally, McKelvey and Page [19] show that, if n people with common priors but different information about the likelihood of some event A agree about a suitable aggregate of their individual conditional probabilities, then their individual conditional probabilities of event A``s occurring must be identical.\n(The precise definition of suitable is described below.)\nThere is a strong connection to rational expectation equilibria in markets, which was noted in the original McKelvey-Page paper: The market price of a security is common knowledge at the point of equilibrium.\nThus, if the price is a suitable aggregate of the conditional expectations of all the agents, then in equilibrium they must have identical conditional expectations of the event that the security will pay off.\n(Note that their information may still be different.)\nDefinition 1.\nA function g : n \u2192 is called stochastically monotone if it can be written in the form g(x) = i gi(xi), where each function gi : \u2192 is strictly increasing.\nBergin and Brandenburger [2] proved that this simple definition of stochastically monotone functions is equivalent to the original definition in McKelvey-Page [19].\nDefinition 2.\nA function g : n \u2192 is called stochastically regular if it can be written in the form g = h \u25e6 g , where g is stochastically monotone and h is invertible on the range of g .\nWe can now state the McKelvey-Page result, as generalized by Nielsen et al. [21].\nIn our context, the following simple theorem statement suffices; more general versions of this theorem can be found in [19, 21].\nTheorem 1.\n(Nielsen et al. 
[21]) Suppose that, at equilibrium, the n agents have a common prior, but possibly different information, about the value of a random variable F, as described above.\nFor all i, let p\u221e i = E(F|x \u2208 S\u221e i ).\nIf g is a stochastically regular function and g(p\u221e 1 , p\u221e 2 , ... , p\u221e n ) is common knowledge, then it must be the case that p\u221e 1 = p\u221e 2 = \u00b7 \u00b7 \u00b7 = p\u221e n = E(F|x \u2208 S\u221e ) = p\u221e In one round of our simplified Shapley-Shubik trading model, the announced price is the mean of the conditional expectations of the n agents.\nThe mean is a stochastically regular function; hence, Theorem 1 shows that, at equilibrium, all agents have identical conditional expectations of the payoff of the security.\nIt follows that the equilibrium 159 price p\u221e must be exactly the conditional expectations of all agents at equilibrium.\nTheorem 1 does not in itself say how the equilibrium is reached.\nMcKelvey and Page, extending an argument due to Geanakoplos and Polemarchakis [10], show that repeated announcement of the aggregate will eventually result in common knowledge of the aggregate.\nIn our context, this is achieved by announcing the current price at the end of each round; this will ultimately converge to a state in which all agents bid the same price p\u221e .\nHowever, reaching an equilibrium price is not sufficient for the purposes of information aggregation.\nWe also want the price to reveal the actual value of f(x).\nIt is possible that the equilibrium price p\u221e of the security F will not be either 0 or 1, and so we cannot infer the value of f(x) from it.\nExample 1: Consider two agents 1 and 2 with private input bits x1 and x2 respectively.\nSuppose the prior probability distribution is uniform, i.e., x = (x1, x2) takes the values (0, 0), (0, 1), (1, 0), and (1, 1) each with probability 1 4 .\nNow, suppose the aggregate function we want to compute is the XOR function, f(x) = x1 \u2295 x2.\nTo this end, we design a market to trade in a Boolean security F, which will eventually payoff $1 iff x1 \u2295 x2 = 1.\nIf agent 1 observes x1 = 1, she estimates the expected value of F to be the probability that x2 = 0 (given x1 = 1), which is 1 2 .\nIf she observes x1 = 0, her expectation of the value of F is the conditional probability that x2 = 1, which is also 1 2 .\nThus, in either case, agent 1 will bid 0.5 for F in the first round.\nSimilarly, agent 2 will also always bid 0.5 in the first round.\nHence, the first round of trading ends with a clearing price of 0.5.\nFrom this, agent 2 can infer that agent 1 bid 0.5, but this gives her no information about the value of x1-it is still equally likely to be 0 or 1.\nAgent 1 also gains no information from the first round of trading, and hence neither agent changes her bid in the following rounds.\nThus, the market reaches equilibrium at this point.\nAs predicted by Theorem 1, both agents have the same conditional expectation (0.5) at equilibrium.\nHowever, the equilibrium price of the security F does not reveal the value of f(x1, x2), even though the combination of agents'' information is enough to determine it precisely.\n4.2 Characterizing computable aggregates We now give a necessary and sufficient characterization of the class of functions f such that, for any prior distribution on x, the equilibrium price of F will reveal the true value of f.\nWe show that this is exactly the class of weighted threshold functions: Definition 3.\nA function f : {0, 1}n \u2192 {0, 1} is a weighted 
threshold function iff there are real constants w_1, w_2, ..., w_n such that f(x) = 1 iff \sum_{i=1}^{n} w_i x_i \ge 1.\nTheorem 2.\nIf f is a weighted threshold function, then, for any prior probability distribution P, the equilibrium price of F is equal to f(x).\nProof: Let S^\infty_i denote the possibility set of agent i at equilibrium.\nAs before, we use p^\infty to denote the final trading price at this point.\nNote that, by Theorem 1, p^\infty is exactly agent i's conditional expectation of the value of f(x), given her final possibility set S^\infty_i.\nFirst, observe that if p^\infty is 0 or 1, then we must have f(x) = p^\infty, regardless of the form of f. For instance, if p^\infty = 1, this means that E(f(y) | y \in S^\infty) = 1.\nAs f(\cdot) can only take the values 0 or 1, it follows that P(f(y) = 1 | y \in S^\infty) = 1.\nThe actual value x is always in the final possibility set S^\infty, and, furthermore, it must have non-zero prior probability, because it actually occurred.\nHence, it follows that f(x) = 1 in this case.\nAn identical argument shows that if p^\infty = 0, then f(x) = 0.\nHence, it is enough to show that, if f is a weighted threshold function, then p^\infty is either 0 or 1.\nWe prove this by contradiction.\nLet f(\cdot) be a weighted threshold function corresponding to weights \{w_i\}, and assume that 0 < p^\infty < 1.\nBy Theorem 1, we must have:\nP(f(y) = 1 | y \in S^\infty) = p^\infty   (1)\n\forall i: P(f(y) = 1 | y \in S^\infty_i) = p^\infty   (2)\nRecall that S^\infty_i = \{ y \in S^\infty : y_i = x_i \}.\nThus, Equation (2) can be written as\n\forall i: P(f(y) = 1 | y \in S^\infty, y_i = x_i) = p^\infty   (3)\nNow define\nJ^+_i = P(y_i = 1 | y \in S^\infty, f(y) = 1), J^-_i = P(y_i = 1 | y \in S^\infty, f(y) = 0),\nJ^+ = \sum_{i=1}^{n} w_i J^+_i, J^- = \sum_{i=1}^{n} w_i J^-_i.\nBecause by assumption p^\infty \ne 0, 1, both J^+_i and J^-_i are well-defined (for all i): neither is conditioned on a zero-probability event.\nClaim: Eqs. (1) and (3) imply that J^+_i = J^-_i, for all i.\nProof of claim: We consider the two cases x_i = 1 and x_i = 0 separately.\nCase (i): x_i = 1.\nWe can assume that J^-_i and J^+_i are not both 0 (or else, the claim is trivially true).\nIn this case, we have\n\frac{P(f(y) = 1 | y \in S^\infty) \cdot J^+_i}{P(f(y) = 1 | y \in S^\infty) \cdot J^+_i + P(f(y) = 0 | y \in S^\infty) \cdot J^-_i} = P(f(y) = 1 | y_i = 1, y \in S^\infty)   (Bayes' law)\n\frac{p^\infty J^+_i}{p^\infty J^+_i + (1 - p^\infty) J^-_i} = p^\infty   (by Eqs. (1) and (3))\nJ^+_i = p^\infty J^+_i + (1 - p^\infty) J^-_i \implies J^+_i = J^-_i   (as p^\infty \ne 1).\nCase (ii): x_i = 0.\nWhen x_i = 0, observe that the argument of Case (i) can be used to prove that (1 - J^+_i) = (1 - J^-_i).\nIt immediately follows that J^+_i = J^-_i as well.\nHence, we must also have J^+ = J^-.\nBut using linearity of expectation, we can also write J^+ as\nJ^+ = E( \sum_{i=1}^{n} w_i y_i | y \in S^\infty, f(y) = 1 ),\nand, because f(y) = 1 only when \sum_i w_i y_i \ge 1, this gives us J^+ \ge 1.\nSimilarly,\nJ^- = E( \sum_{i=1}^{n} w_i y_i | y \in S^\infty, f(y) = 0 ),\nand thus J^- < 1.\nThis implies J^- \ne J^+, which leads to a contradiction.\nPerhaps surprisingly, the converse of Theorem 2 also holds:\nTheorem 3.\nSuppose f : \{0, 1\}^n \to \{0, 1\} cannot be expressed as a weighted threshold function.\nThen there exists a prior distribution P for which the price of the security F does not converge to the value of f(x).\nProof: We start from a geometric characterization of weighted threshold functions.\nConsider the Boolean hypercube \{0, 1\}^n as a set of points in \mathbb{R}^n.\nIt is well known that f is expressible as a weighted threshold function iff there is a hyperplane in \mathbb{R}^n that separates all the points at which f has value 0 from all the points at which f has value 1.\nNow, consider the sets H^+ = Conv(f^{-1}(1)) and H^- = Conv(f^{-1}(0)), where Conv(S) denotes the convex hull of S in \mathbb{R}^n.\nH^+ and H^- are convex sets in \mathbb{R}^n, and so, if they do not intersect, we can find a separating hyperplane between them.\nThis means that, if f is not expressible as a weighted threshold function, H^+ and H^- must intersect.\nIn this case, we show how to construct a prior P for which f(x) is not computed by the market.\nLet x^* \in \mathbb{R}^n be a point in H^+ \cap H^-.\nBecause x^* is in H^+, there exist points z^1, z^2, ..., z^m and constants \lambda_1, \lambda_2, ..., \lambda_m such that the following constraints are satisfied:\n\forall k: z^k \in \{0, 1\}^n and f(z^k) = 1; \forall k: 0 < \lambda_k \le 1; \sum_{k=1}^{m} \lambda_k = 1; and \sum_{k=1}^{m} \lambda_k z^k = x^*.\nSimilarly, because x^* \in H^-, there are points y^1, y^2, ..., y^l and constants \mu_1, \mu_2, ..., \mu_l such that\n\forall j: y^j \in \{0, 1\}^n and f(y^j) = 0; \forall j: 0 < \mu_j \le 1; \sum_{j=1}^{l} \mu_j = 1; and \sum_{j=1}^{l} \mu_j y^j = x^*.\nWe now define our prior distribution P as follows:\nP(z^k) = \lambda_k / 2 for k = 1, 2, ..., m, and P(y^j) = \mu_j / 2 for j = 1, 2, ...
, l, and all other points are assigned probability 0.\nIt is easy to see that this is a valid probability distribution.\nUnder this distribution P, first observe that P(f(x) = 1) = 1 2 .\nFurther, for any i such that 0 < x\u2217 i < 1, we have P(f(x) = 1|xi = 1) = P(f(x) = 1 \u2227 xi = 1) P(xi = 1) = x\u2217 i 2 x\u2217 i = 1 2 and P(f(x) = 1|xi = 0) = P(f(x) = 1 \u2227 xi = 0) P(xi = 0) = (1\u2212x\u2217 i ) 2 (1 \u2212 x\u2217 i ) = 1 2 For indices i such that x\u2217 i is 0 or 1 exactly, i``s private information reveals no additional information under prior P, and so here too we have P(f(x) = 1|xi = 0) = P(f(x) = 1|xi = 1) = 1 2 .\nHence, regardless of her private bit xi, each agent i will bid 0.5 for security F in the first round.\nThe clearing price of 0.5 also reveals no additional information, and so this is an equilibrium with price p\u221e = 0.5 that does not reveal the value of f(x).\n2 The XOR function is one example of a function that cannot be expressed as weighted threshold function; Example 1 illustrates Theorem 3 for this function.\n4.3 Convergence time bounds We have shown that the class of Boolean functions computable in our model is the class of weighted threshold functions.\nThe next natural question to ask is: How many rounds of trading are necessary before the equilibrium is reached?\nWe analyze this problem using the same simplified Shapley-Shubik model of market clearing in each round.\nWe first prove that, in the worst case, at most n rounds are required.\nThe idea of the proof is to consider the sequence of common knowledge sets \u2126 = S0 , S1 , ..., and show that, until the market reaches equilibrium, each set has a strictly lower dimension than the previous set.\nDefinition 4.\nFor a set S \u2286 {0, 1}n , the dimension of set S is the dimension of the smallest linear subspace of n that contains all the points in S; we use the notation dim(S) to denote it.\nLemma 1.\nIf Sr = Sr\u22121 , then dim(Sr ) < dim(Sr\u22121 ).\nProof: Let k = dim(Sr\u22121 ).\nConsider the bids in round r.\nIn our model, agent i will bid her current expectation for the value of F, br i = E(f(y) = 1|y \u2208 Sr\u22121 , yi = xi).\nThus, depending on the value of xi, br i will take on one of two values h (0) i or h (1) i .\nNote that h (0) i and h (1) i depend only on the set Sr\u22121 , which is common knowledge before round 161 r. Setting di = h (1) i \u2212 h (0) i , we can write br i = h (0) i + dixi.\nIt follows that the clearing price in round r is given by pr = 1 n n i=1 (h (0) i + dixi) (4) All the agents already know all the h (0) i and di values, and they observe the price pr at the end of the rth round.\nThus, they effectively have a linear equation in x1, x2, ... 
, xn that they use to improve their knowledge by ruling out any possibility that would not have resulted in price pr .\nIn other words, after r rounds, the common knowledge set Sr is the intersection of Sr\u22121 with the hyperplane defined by Equation (4).\nIt follows that Sr is contained in the intersection of this hyperplane with the k-dimension linear space containing Sr\u22121 .\nIf Sr is not equal to Sr\u22121 , this intersection defines a linear subspace of dimension (k \u2212 1) that contains Sr , and hence Sr has dimension at most (k \u2212 1).\n2 Theorem 4.\nLet f be a weighted threshold function, and let P be an arbitrary prior probability distribution.\nThen, after at most n rounds of trading, the price reaches its equilibrium value p\u221e = f(x).\nProof: Consider the sequence of common knowledge sets S0 , S1 , ..., and let r be the minimum index such that Sr = Sr\u22121 .\nThen, the rth round of trading does not improve any agent``s knowledge, and thus we must have S\u221e = Sr\u22121 and p\u221e = pr\u22121 .\nObserving that dim(S0 ) = n, and applying Lemma 1 to the first r \u2212 1 rounds, we must have (r \u2212 1) \u2264 n. Thus, the price reaches its equilibrium value within n rounds.\n2 Theorem 4 provides an upper bound of O(n) on the number of rounds required for convergence.\nWe now show that this bound is tight to within a factor of 2 by constructing a threshold function with 2n inputs and a prior distribution for which it takes n rounds to determine the value of f(x) in the worst case.\nThe functions we use are the carry-bit functions.\nThe function Cn takes 2n inputs; for convenience, we write the inputs as x1, x2 ... , xn, y1, y2, ... , yn or as a pair (x, y).\nThe function value is the value of the high-order carry bit when the binary numbers xnxn\u22121 \u00b7 \u00b7 \u00b7 x1 and ynyn\u22121 \u00b7 \u00b7 \u00b7 y1 are added together.\nIn weighted threshold form, this can be written as Cn(x, y) = 1 iff n i=1 xi + yi 2n+1\u2212i \u2265 1.\nFor this proof, let us call the agents A1, A2, ... , An, B1, B2, ... 
, Bn, where Ai holds input bit xi, and Bi holds input bit yi.\nWe first illustrate our technique by proving that computing C2 requires 2 rounds in the worst case.\nTo do this, we construct a common prior P2 as follows: \u2022 The pair (x1, y1) takes on the values (0, 0), (0, 1), (1, 0), (1, 1) uniformly (i.e., with probability 1 4 each).\n\u2022 We extend this to a distribution on (x1, x2, y1, y2) by specifying the conditional distribution of (x2, y2) given (x1, y1): If (x1, y1) = (1, 1), then (x2, y2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1 2 , 1 6 , 1 6 , 1 6 respectively.\nOtherwise, (x2, y2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1 6 , 1 6 , 1 6 , 1 2 respectively.\nNow, suppose x1 turns out to be 1, and consider agent A1``s bid in the first round.\nIt is given by b1 A1 = P(C2(x1, x2, y1, y2) = 1|x1 = 1)) = P(y1 = 1|x1 = 1) \u00b7 P((x2, y2) = (0, 0)|x1 = 1, y1 = 1) +P(y1 = 0|x1 = 1) \u00b7 P((x2, y2) = (1, 1)|x1 = 1, y1 = 0) = 1 2 \u00b7 1 2 + 1 2 \u00b7 1 2 = 1 2 On the other hand, if x1 turns out to be 0, agent A1``s bid would be given by b1 A1 = P(C2(x1, x2, y1, y2) = 1|x1 = 0)) = P((x2, y2) = (1, 1)|x1 = 0) = 1 2 Thus, irrespective of her bit, A1 will bid 0.5 in the first round.\nNote that the function and distribution are symmetric between x and y, and so the same argument shows that B1 will also bid 0.5 in the first round.\nThus, the price p1 announced at the end of the first round reveals no information about x1 or y1.\nThe reason this occurs is that, under this distribution, the second carry bit C2 is statistically independent of the first carry bit (x1 \u2227 y1); we will use this trick again in the general construction.\nNow, suppose that (x2, y2) is either (0, 1) or (1, 0).\nThen, even if x2 and y2 are completely revealed by the first-round price, the value of C2(x1, x2, y1, y2) is not revealed: It will be 1 if x1 = y1 = 1 and 0 otherwise.\nThus, we have shown that at least 2 rounds of trading will be required to reveal the function value in this case.\nWe now extend this construction to show by induction that the function Cn takes n rounds to reach an equilibrium in the worst case.\nTheorem 5.\nThere is a function Cn with 2n inputs and a prior distribution Pn such that, in the worst case, the market takes n rounds to reveal the value of Cn(\u00b7).\nProof: We prove the theorem by induction on n.\nThe base case for n = 2 has already been shown to be true.\nStarting from the distribution P2 described above, we construct the distributions P3, P4, ... , Pn by inductively applying the following rule: \u2022 Let x\u2212n denote the vector (x1, x2, ... 
, xn\u22121), and define y\u2212n similarly.\nWe extend the distribution Pn\u22121 on (x\u2212n , y\u2212n ) to a distribution Pn on (x, y) by specifying the conditional distribution of (xn, yn) given (x\u2212n , y\u2212n ): If Cn\u22121(x\u2212n , y\u2212n ) = 1, then (xn, yn) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1 2 , 1 6 , 1 6 , 1 6 respectively.\nOtherwise, (xn, yn) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1 6 , 1 6 , 1 6 , 1 2 respectively.\nClaim: Under distribution Pn, for all i < n, P(Cn(x, y) = 1|xi = 1) = P(Cn(x, y) = 1|xi = 0).\n162 Proof of claim: A similar calculation to that used for C2 above shows that the value of Cn(x, y) under this distribution is statistically independent of Cn\u22121(x\u2212n , y\u2212n ).\nFor i < n, xi can affect the value of Cn only through Cn\u22121.\nAlso, by contruction of Pn, given the value of Cn\u22121, the distribution of Cn is independent of xi.\nIt follows that Cn(x, y) is statistically independent of xi as well.\nOf course, a similar result holds for yi by symmetry.\nThus, in the first round, for all i = 1, 2, ... , n \u2212 1, the bids of agents Ai and Bi do not reveal anything about their private information.\nThus, the first-round price does not reveal any information about the value of (x\u2212n , y\u2212n ).\nOn the other hand, agents An and Bn do have different expectations of Cn(x) depending on whether their input bit is a 0 or a 1; thus, the first-round price does reveal whether neither, one, or both of xn and yn are 1.\nNow, consider a situation in which (xn, yn) takes on the value (1, 0) or (0, 1).\nWe show that, in this case, after one round we are left with the residual problem of computing the value of Cn\u22121(x\u2212n , y\u2212n ) under the prior Pn\u22121.\nClearly, when xn + yn = 1, Cn(x, y) = Cn\u22121(x\u2212n , y\u2212n ).\nFurther, according to the construction of Pn, the event (xn+ yn = 1) has the same probability (1/3) for all values of (x\u2212n , y\u2212n ).\nThus, conditioning on this fact does not alter the probability distribution over (x\u2212n , y\u2212n ); it must still be Pn\u22121.\nFinally, the inductive assumption tells us that solving this residual problem will take at least n \u2212 1 more rounds in the worst case and hence that finding the value of Cn(x, y) takes at least n rounds in the worst case.\n2 5.\nDISCUSSION Our results have been derived in a simplified model of an information market.\nIn this section, we discuss the applicability of these results to more general trading models.\nAssuming that agents bid truthfully, Theorem 2 holds in any model in which the price is a known stochastically monotone aggregate of agents'' bids.\nWhile it seems reasonable that the market price satisfies monotonicity properties, the exact form of the aggregate function may not be known if the volume of each user``s trades is not observable; this depends on the details of the market process.\nTheorem 3 and Theorem 5 hold more generally; they only require that an agent``s strategy depends only on her conditional expectation of the security``s value.\nPerhaps the most fragile result is Theorem 4, which relies on the linear form of the Shapley-Shubik clearing price (in addition to the conditions for Theorem 2); however, it seems plausible that a similar dimension-based bound will hold for other families of nonlinear clearing prices.\nUp to this point, we have described the model with the same number of agents as bits of information.\nHowever, all the results hold 
even if there is competition in the form of a known number of agents who know each bit of information.\nIndeed, modeling such competition may help alleviate the strategic problems in our current model.\nAnother interesting approach to addressing the strategic issue is to consider alternative markets that are at least myopically incentive compatible.\nOne example is a market mechanism called a market scoring rule, suggested by Hanson [12].\nThese markets have the property that a riskneutral agent``s best myopic strategy is to truthfully bid her current expected value of the security.\nAdditionally, the number of securities involved in each trade is fixed and publicly known.\nIf the market structure is such that, for example, the current scoring rule is posted publicly after each agent``s trade, then in equilibrium there is common knowledge of all agents'' expectation, and hence Theorem 2 holds.\nTheorem 3 also applies in this case, and hence we have the same characterization for the set of computable Boolean functions.\nThis suggests that the problem of eliciting truthful responses may be orthogonal to the problem of computing the desired aggregate, reminiscent of the revelation principle [18].\nIn this paper, we have restricted our attention to the simplest possible aggregation problem: computing Boolean functions of Boolean inputs.\nThe proofs of Theorems 3 and 5 also hold if we consider Boolean functions of real inputs, where each agent``s private information is a real number.\nFurther, Theorem 2 also holds provided the market reaches equilibrium.\nWith real inputs and arbitrary prior distributions, however, it is not clear that the market will reach an equilibrium in a finite number of steps.\n6.\nCONCLUSION 6.1 Summary We have framed the process of information aggregation in markets as a computation on distributed information.\nWe have developed a simplified model of an information market that we believe captures many of the important aspects of real agent interaction in an information market.\nWithin this model, we prove several results characterizing precisely what the market can compute and how quickly.\nSpecifically, we show that the market is guaranteed to converge to the true rational expectations equilibrium if and only if the security payoff function is a weighted threshold function.\nWe prove that the process whereby agents reveal their information over time and learn from the resulting announced prices takes at most n rounds to converge to the correct full-information price in the worst case.\nWe show that this bound is tight within a factor of two.\n6.2 Future work We view this paper as a first step towards understanding the computational power of information markets.\nSome interesting and important next steps include gaining a better understanding of the following: \u2022 The effect of price accuracy and precision: We have assumed that the clearing price is known with unlimited precision; in practice, this will not be true.\nFurther, we have neglected influences on the market price other than from rational traders; the market price may also be influenced by other factors such as misinformed or irrational traders.\nIt is interesting to ask what aggregates can be computed even in the presence of noisy prices.\n\u2022 Incremental updates: If the agents have computed the value of the function and a small number of input bits are switched, can the new value of the function be computed incrementally and quickly?\n\u2022 Distributed computation: In our model, distributed 
information is aggregated through a centralized market 163 computation.\nIn a sense, some of the computation itself is distributed among the participating agents, but can the market computation also be distributed?\nFor example, can we find a good distributed-computational model of a decentralized market?\n\u2022 Agents'' computation: We have not accounted for the complexity of the computations that agents must do to accurately update their beliefs after each round.\n\u2022 Strategic market models: For reasons of simplicity and tractability, we have directly assumed that agents bid truthfully.\nA more satisfying approach would be to assume only rationality and solve for the resulting gametheoretic solution strategy, either in our current computational model or another model of an information market.\n\u2022 The common-prior assumption: Can we say anything about the market behavior when agents'' priors are only approximately the same or when they differ greatly?\n\u2022 Average-case analysis: Our negative results (Theorems 3 and 5) examine worst-case scenarios, and thus involve very specific prior probability distributions.\nIt is interesting to ask whether we would get very different results for generic prior distributions.\n\u2022 Information market design: Non-threshold functions can be implemented by layering two or more threshold functions together.\nWhat is the minimum number of threshold securities required to implement a given function?\nThis is exactly the problem of minimizing the size of a neural network, a well-studied problem known to be NP-hard [15].\nWhat configuration of securities can best approximate a given function?\nAre there ways to define and configure securities to speed up convergence to equilibrium?\nWhat is the relationship between machine learning (e.g., neural-network learning) and information-market design?\nAcknowledgments We thank Joe Kilian for many helpful discussions.\nWe thank Robin Hanson and the anonymous reviewers for useful insights and pointers.\n7.\nREFERENCES [1] K. J. Arrow.\nThe role of securities in the optimal allocation of risk-bearing.\nReview of Economic Studies, 31(2):91-96, 1964.\n[2] J. Bergin and A. Brandenburger.\nA simple characterization of stochastically monotone functions.\nEconometrica, 58(5):1241-1243, Sept. 1990.\n[3] S. Debnath, D. M. Pennock, C. L. Giles, and S. Lawrence.\nInformation incorporation in online in-game sports betting markets.\nIn Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC``03), June 2003.\n[4] P. Dubey, J. Geanakoplos, and M. Shubik.\nThe revelation of information in strategic market games: A critique of rational expectations equilibrium.\nJournal of Mathematical Economics, 16:105-137, 1987.\n[5] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi.\nReasoning About Knowledge.\nMIT Press, Cambridge, MA, 1996.\n[6] R. Forsythe and R. Lundholm.\nInformation aggregation in an experimental market.\nEconometrica, 58(2):309-347, 1990.\n[7] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright.\nAnatomy of an experimental political stock market.\nAmerican Economic Review, 82(5):1142-1161, 1992.\n[8] R. Forsythe, T. A. Rietz, and T. W. Ross.\nWishes, expectations, and actions: A survey on price formation in election stock markets.\nJournal of Economic Behavior and Organization, 39:83-110, 1999.\n[9] J. M. Gandar, W. H. Dare, C. R. Brown, and R. A. 
Zuber.\nInformed traders and price variations in the betting market for professional basketball games.\nJournal of Finance, LIII(1):385-401, 1998.\n[10] J. Geanakoplos and H. Polemarchakis.\nWe can``t disagree forever.\nJournal of Economic Theory, 28(1):192-200, 1982.\n[11] S. J. Grossman.\nAn introduction to the theory of rational expectations under asymmetric information.\nReview of Economic Studies, 48(4):541-559, 1981.\n[12] R. Hanson.\nCombinatorial information market design.\nInformation Systems Frontiers, 5(1), 2002.\n[13] M. Jackson and J. Peck.\nAsymmetric information in a strategic market game: Reexamining the implications of rational expectations.\nEconomic Theory, 13:603-628, 1999.\n[14] J. C. Jackwerth and M. Rubinstein.\nRecovering probability distributions from options prices.\nJournal of Finance, 51(5):1611-1631, Dec. 1996.\n[15] J.-H.\nLin and J. S. Vitter.\nComplexity results on learning by neural nets.\nMachine Learning, 6:211-230, 1991.\n[16] R. E. Lucas.\nExpectations and the neutrality of money.\nJournal of Economic Theory, 4(2):103-24, 1972.\n[17] M. Magill and M. Quinzii.\nTheory of Incomplete Markets, Vol.\n1.\nMIT Press, 1996.\n[18] A. Mas-Colell, M. D. Whinston, and J. R. Green.\nMicroeconomic Theory.\nOxford University Press, New York, 1995.\n[19] R. D. McKelvey and T. Page.\nCommon knowledge, consensus, and aggregate information.\nEconometrica, 54(1):109-127, 1986.\n[20] P. Milgrom and N. Stokey.\nInformation, trade, and common knowledge.\nJournal of Economic Theory, 26:17-27, 1982.\n[21] L. T. Nielsen, A. Brandenburger, J. Geanakoplos, R. McKelvey, and T. Page.\nCommon knowledge of an aggregate of expectations.\nEconometrica, 58(5):1235-1238, 1990.\n[22] D. M. Pennock, S. Debnath, E. J. Glover, and C. L. Giles.\nModeling information incorporation in markets, with application to detecting and explaining events.\nIn Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 2002.\n164 [23] D. M. Pennock, S. Lawrence, C. L. Giles, and F. \u02daA.\nNielsen.\nThe real power of artificial markets.\nScience, 291:987-988, February 2001.\n[24] D. M. Pennock, S. Lawrence, F. \u02daA.\nNielsen, and C. L. Giles.\nExtracting collective probabilistic forecasts from web games.\nIn Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.\n[25] C. R. Plott and S. Sunder.\nRational expectations and the aggregation of diverse information in laboratory security markets.\nEconometrica, 56(5):1085-1118, 1988.\n[26] C. R. Plott, J. Wit, and W. C. Yang.\nParimutuel betting markets as information aggregation devices: Experimental results.\nTechnical Report Social Science Working Paper 986, California Institute of Technology, Apr. 1997.\n[27] C. Schmidt and A. Werwatz.\nHow accurate do markets predict the outcome of an event?\nthe Euro 2000 soccer championships experiment.\nTechnical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.\n[28] L. Shapley and M. Shubik.\nTrade using one commodity as a means of payment.\nJournal of Political Economy, 85:937-968, 1977.\n[29] Y. Shoham and M. Tennenholtz.\nRational computation and the communication complexity of auctions.\nGames and Economic Behavior, 35(1-2):197-211, 2001.\n[30] R. H. Thaler and W. T. Ziemba.\nAnomalies: Parimutuel betting markets: Racetracks and lotteries.\nJournal of Economic Perspectives, 2(2):161-174, 1988.\n[31] H. R. 
Varian.\nThe arbitrage principle in financial economics.\nJournal of Economic Perspectives, 1(2):55-72, 1987.\n165", "lvl-3": "Computation in a Distributed Information Market \u2217\nABSTRACT\nAccording to economic theory -- supported by empirical and laboratory evidence -- the equilibrium price of a financial security reflects all of the information regarding the security 's value .\nWe investigate the computational process on the path toward equilibrium , where information distributed among traders is revealed step-by-step over time and incorporated into the market price .\nWe develop a simplified model of an information market , along with trading strategies , in order to formalize the computational properties of the process .\nWe show that securities whose payoffs can not be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory .\nOn the other hand , securities whose payoffs are threshold functions are guaranteed to converge , for all prior probability distributions .\nMoreover , these threshold securities converge in at most n rounds , where n is the number of bits of distributed information .\nWe also prove a lower bound , showing a type of threshold security that requires at least n/2 rounds to converge in the worst case .\n\u2217 This work was supported by the DoD University Research Initiative ( URI ) administered by the Office of Naval Research under Grant N00014-01-1-0795 .\n\u2020 Supported in part by ONR grant N00014-01-0795 and NSF grants CCR-0105337 , CCR-TC-0208972 , ANI-0207399 , and ITR-0219018 .\n\u2021 This work conducted while at NEC Laboratories America , Princeton , NJ .\n1 .\nINTRODUCTION\nThe strong form of the efficient markets hypothesis states that market prices nearly instantly incorporate all information available to all traders .\nAs a result , market prices encode the best forecasts of future outcomes given all information , even if that information is distributed across many sources .\nSupporting evidence can be found in empirical studies of options markets [ 14 ] , political stock markets [ 7 , 8 , 22 ] , sports betting markets [ 3 , 9 , 27 ] , horse-racing markets [ 30 ] , market games [ 23 , 24 ] , and laboratory investigations of experimental markets [ 6 , 25 , 26 ] .\nThe process of information incorporation is , at its essence , a distributed computation .\nEach trader begins with his or her own information .\nAs trades are made , summary information is revealed through market prices .\nTraders learn or infer what information others are likely to have by observing prices , then update their own beliefs based on their observations .\nOver time , if the process works as advertised , all information is revealed , and all traders converge to the same information state .\nAt this point , the market is in what is called a rational expectations equilibrium [ 11 , 16 , 19 ] .\nAll information available to all traders is now reflected in the going prices , and no further trades are desirable until some new information becomes available .\nWhile most markets are not designed with information aggregation as a primary motivation -- for example , derivatives\nmarkets are intended mainly for risk management and sports betting markets for entertainment -- recently , some markets have been created solely for the purpose of aggregating information on a topic of interest .\nThe Iowa Electronic Market1 is a prime example , operated by the University of Iowa Tippie College 
of Business for the purpose of investigating how information about political elections distributed among traders gets reflected in securities prices whose payoffs are tied to actual election outcomes [ 7 , 8 ] .\nIn this paper , we investigate the nature of the computational process whereby distributed information is revealed and combined over time into the prices in information markets .\nTo do so , in Section 3 , we propose a model of an information market that is tractable for theoretical analysis and , we believe , captures much of the important essence of real information markets .\nIn Section 4 , we present our main theoretical results concerning this model .\nWe prove that only Boolean securities whose payoffs can be expressed as threshold functions of the distributed input bits of information are guaranteed to converge as predicted by rational expectations theory .\nBoolean securities with more complex payoffs may not converge under some prior distributions .\nWe also provide upper and lower bounds on the convergence time for these threshold securities .\nWe show that , for all prior distributions , the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds , where n is the number of bits of distributed information .\nWe show that this worst-case bound is tight within a factor of two by illustrating a situation in which a threshold security requires n/2 rounds to converge .\n2 .\nRELATIONSHIP TO RELATED WORK\n3 .\nMODEL OF AN INFORMATION MARKET\n3.1 Initial information state\n3.2 Market mechanism\n5\n3.3 Agent strategies\n4 .\nCOMPUTATIONAL PROPERTIES\n4.1 Equilibrium price characterization\n4.2 Characterizing computable aggregates\nProof :\n4.3 Convergence time bounds\n5 .\nDISCUSSION\n6 .\nCONCLUSION 6.1 Summary\n6.2 Future work", "lvl-4": "Computation in a Distributed Information Market \u2217\nABSTRACT\nAccording to economic theory -- supported by empirical and laboratory evidence -- the equilibrium price of a financial security reflects all of the information regarding the security 's value .\nWe investigate the computational process on the path toward equilibrium , where information distributed among traders is revealed step-by-step over time and incorporated into the market price .\nWe develop a simplified model of an information market , along with trading strategies , in order to formalize the computational properties of the process .\nWe show that securities whose payoffs can not be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory .\nOn the other hand , securities whose payoffs are threshold functions are guaranteed to converge , for all prior probability distributions .\nMoreover , these threshold securities converge in at most n rounds , where n is the number of bits of distributed information .\nWe also prove a lower bound , showing a type of threshold security that requires at least n/2 rounds to converge in the worst case .\n\u2217 This work was supported by the DoD University Research Initiative ( URI ) administered by the Office of Naval Research under Grant N00014-01-1-0795 .\n\u2020 Supported in part by ONR grant N00014-01-0795 and NSF grants CCR-0105337 , CCR-TC-0208972 , ANI-0207399 , and ITR-0219018 .\n\u2021 This work conducted while at NEC Laboratories America , Princeton , NJ .\n1 .\nINTRODUCTION\nThe strong form of the efficient markets hypothesis states that market prices nearly instantly incorporate 
all information available to all traders .\nAs a result , market prices encode the best forecasts of future outcomes given all information , even if that information is distributed across many sources .\nThe process of information incorporation is , at its essence , a distributed computation .\nEach trader begins with his or her own information .\nAs trades are made , summary information is revealed through market prices .\nTraders learn or infer what information others are likely to have by observing prices , then update their own beliefs based on their observations .\nOver time , if the process works as advertised , all information is revealed , and all traders converge to the same information state .\nAt this point , the market is in what is called a rational expectations equilibrium [ 11 , 16 , 19 ] .\nAll information available to all traders is now reflected in the going prices , and no further trades are desirable until some new information becomes available .\nWhile most markets are not designed with information aggregation as a primary motivation -- for example , derivatives\nIn this paper , we investigate the nature of the computational process whereby distributed information is revealed and combined over time into the prices in information markets .\nTo do so , in Section 3 , we propose a model of an information market that is tractable for theoretical analysis and , we believe , captures much of the important essence of real information markets .\nWe prove that only Boolean securities whose payoffs can be expressed as threshold functions of the distributed input bits of information are guaranteed to converge as predicted by rational expectations theory .\nBoolean securities with more complex payoffs may not converge under some prior distributions .\nWe also provide upper and lower bounds on the convergence time for these threshold securities .\nWe show that , for all prior distributions , the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds , where n is the number of bits of distributed information .\nWe show that this worst-case bound is tight within a factor of two by illustrating a situation in which a threshold security requires n/2 rounds to converge .", "lvl-2": "Computation in a Distributed Information Market \u2217\nABSTRACT\nAccording to economic theory -- supported by empirical and laboratory evidence -- the equilibrium price of a financial security reflects all of the information regarding the security 's value .\nWe investigate the computational process on the path toward equilibrium , where information distributed among traders is revealed step-by-step over time and incorporated into the market price .\nWe develop a simplified model of an information market , along with trading strategies , in order to formalize the computational properties of the process .\nWe show that securities whose payoffs can not be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory .\nOn the other hand , securities whose payoffs are threshold functions are guaranteed to converge , for all prior probability distributions .\nMoreover , these threshold securities converge in at most n rounds , where n is the number of bits of distributed information .\nWe also prove a lower bound , showing a type of threshold security that requires at least n/2 rounds to converge in the worst case .\n\u2217 This work was supported by the DoD University 
Research Initiative ( URI ) administered by the Office of Naval Research under Grant N00014-01-1-0795 .\n\u2020 Supported in part by ONR grant N00014-01-0795 and NSF grants CCR-0105337 , CCR-TC-0208972 , ANI-0207399 , and ITR-0219018 .\n\u2021 This work conducted while at NEC Laboratories America , Princeton , NJ .\n1 .\nINTRODUCTION\nThe strong form of the efficient markets hypothesis states that market prices nearly instantly incorporate all information available to all traders .\nAs a result , market prices encode the best forecasts of future outcomes given all information , even if that information is distributed across many sources .\nSupporting evidence can be found in empirical studies of options markets [ 14 ] , political stock markets [ 7 , 8 , 22 ] , sports betting markets [ 3 , 9 , 27 ] , horse-racing markets [ 30 ] , market games [ 23 , 24 ] , and laboratory investigations of experimental markets [ 6 , 25 , 26 ] .\nThe process of information incorporation is , at its essence , a distributed computation .\nEach trader begins with his or her own information .\nAs trades are made , summary information is revealed through market prices .\nTraders learn or infer what information others are likely to have by observing prices , then update their own beliefs based on their observations .\nOver time , if the process works as advertised , all information is revealed , and all traders converge to the same information state .\nAt this point , the market is in what is called a rational expectations equilibrium [ 11 , 16 , 19 ] .\nAll information available to all traders is now reflected in the going prices , and no further trades are desirable until some new information becomes available .\nWhile most markets are not designed with information aggregation as a primary motivation -- for example , derivatives\nmarkets are intended mainly for risk management and sports betting markets for entertainment -- recently , some markets have been created solely for the purpose of aggregating information on a topic of interest .\nThe Iowa Electronic Market1 is a prime example , operated by the University of Iowa Tippie College of Business for the purpose of investigating how information about political elections distributed among traders gets reflected in securities prices whose payoffs are tied to actual election outcomes [ 7 , 8 ] .\nIn this paper , we investigate the nature of the computational process whereby distributed information is revealed and combined over time into the prices in information markets .\nTo do so , in Section 3 , we propose a model of an information market that is tractable for theoretical analysis and , we believe , captures much of the important essence of real information markets .\nIn Section 4 , we present our main theoretical results concerning this model .\nWe prove that only Boolean securities whose payoffs can be expressed as threshold functions of the distributed input bits of information are guaranteed to converge as predicted by rational expectations theory .\nBoolean securities with more complex payoffs may not converge under some prior distributions .\nWe also provide upper and lower bounds on the convergence time for these threshold securities .\nWe show that , for all prior distributions , the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds , where n is the number of bits of distributed information .\nWe show that this worst-case bound is tight within a factor of two by illustrating a situation in which a 
threshold security requires n/2 rounds to converge .\n2 .\nRELATIONSHIP TO RELATED WORK\nAs mentioned , there is a great deal of documented evidence supporting the notion that markets are able to aggregate information in a number of scenarios using a variety of market mechanisms .\nThe theoretically ideal mechanism requires what is called a complete market .\nA complete market contains enough linearly independent securities to span the entire state space of interest [ 1 , 31 ] .\nThat is , the dimensionality of the available securities equals the dimensionality of the event space over which information is to be aggregated .2 In this ideal case , all private information becomes common knowledge in equilibrium , and thus any function of the private information can be directly evaluated by any agent or observer .\nHowever , this theoretical ideal is almost never achievable in practice , because it generally requires a number of securities exponential in the number of random variables of interest .\nWhen available securities form an incomplete market [ 17 ] in relation to the desired information space -- as is usually the case -- aggregation may be partial .\nNot all private information is revealed in equilibrium , and prices may not convey enough information to recover the complete joint probability distribution over all events .\nStill , it is generally assumed that aggregation does occur along the dimensions represented in the market ; that is , prices do reflect a consistent projection of the entire joint distribution onto the smaller-dimensional space spanned by securities .\nIn this pa\nper , we investigate cases in which even this partial aggregation fails .\nFor example , even though there is enough private information to determine completely the price of a security in the market , the equilibrium price may in fact reveal no information at all !\nSo characterizations of when a rational expectations equilibrium is fully revealing do not immediately apply to our problem .\nWe are not asking whether all possible functions of private information can be evaluated , but whether a particular target function can be evaluated .\nWe show that properties of the function itself play a major role , not just the relative dimensionalities of the information and security spaces .\nOur second main contribution is examining the dynamics of information aggregation before equilibrium , in particular proving upper and lower bounds on the time to convergence in those cases in which aggregation succeeds .\nShoham and Tennenholtz [ 29 ] define a rationally computable function as a function of agents ' valuations ( types ) that can be computed by a market , assuming agents follow rational equilibrium strategies .\nThe authors mainly consider auctions of goods as their basic mechanistic unit and examine the communication complexity involved in computing various functions of agents ' valuations of goods .\nFor example , they give auction mechanisms that can compute the maximum , minimum , and kth-highest of the agents ' valuations of a single good using 1 , 1 , and n \u2212 k + 1 bits of communication , respectively .\nThey also examine the potential tradeoff between communication complexity and revenue .\n3 .\nMODEL OF AN INFORMATION MARKET\nTo investigate the properties and limitations of the process whereby an information market converges toward its rational-expectations equilibrium , we formulate a representative model of the market .\nIn designing the model , our goals were two-fold : ( 1 ) to make the 
model rich enough to be realistic and ( 2 ) to make the model simple enough to admit meaningful analysis .\nAny modeling decisions must trade off these two generally conflicting goals , and the decision process is as much an art as a science .\nNonetheless , we believe that our model captures enough of the essence of real information markets to lend credence to the results that follow .\nIn this section , we present our modeling assumptions and justifications in detail .\nSection 3.1 describes the initial information state of the system , Section 3.2 covers the market mechanism , and Section 3.3 presents the agents ' strategies .\n3.1 Initial information state\nThere are n agents ( traders ) in the system , each of whom is privy to one bit of information , denoted xi .\nThe vector of all n bits is denoted x = ( x1 , x2 , ... , xn ) .\nIn the initial state , each agent is aware only of her own bit of information .\nAll agents have a common prior regarding the joint distribution of bits among agents , but none has any specific information about the actual value of bits held by others .\nNote that this common-prior assumption -- typical in the economics literature -- does not imply that all agents agree .\nTo the contrary , because each agent has different information , the initial state of the system is in general a state of disagreement .\nNearly any disagreement that could be modeled by assuming different priors can instead be mod\neled by assuming a common prior with different information , and so the common-prior assumption is not as severe as it may seem .\n3.2 Market mechanism\nThe security being traded by the agents is a financial instrument whose payoff is a function f ( X ) of the agents ' bits .\nThe form of f ( the description of the security ) is common knowledge3 among agents .\nWe sometimes refer to the xi as the input bits .\nAt some time in the future after trading is completed , the true value of f ( X ) is revealed ,4 and every owner of the security is paid an amount f ( X ) in cash per unit owned .\nIf an agent ends with a negative quantity of the security ( by selling short ) , then the agent must pay the amount f ( X ) in cash per unit .\nNote that if someone were to have complete knowledge of all input bits X , then that person would know the true value f ( X ) of the security with certainty , and so would be willing to buy it at any price lower than f ( X ) and ( short ) sell it at any price higher than\n5\nFollowing Dubey , Geanakoplos , and Shubik [ 4 ] , and Jackson and Peck [ 13 ] , we model the market-price formation process as a multiperiod Shapley-Shubik market game [ 28 ] .\nThe Shapley-Shubik process operates as follows : The market proceeds in synchronous rounds .\nIn each round , each agent i submits a bid bi and a quantity qi .\nThe semantics are that agent i is supplying a quantity qi of the security and an amount bi of money to be traded in the market .\nFor simplicity , we assume that there are no restrictions on credit or short sales , and so an agent 's trade is not constrained by her possessions .\nThe market clears in each round by settling at a single price that balances the trade in that round : The clearing price is p = Ei bi / Ei qi .\nAt the end of the round , agent i holds a quantity q ' i proportional to the money she bid : q ' i = bi/p .\nIn addition , she is left with an amount of money b ' i that reflects her net trade at price p : b ' i = bi \u2212 p ( q ' i \u2212 qi ) = pqi .\nNote that agent i 's net trade in the security is a purchase 
if p < bi/qi and a sale if p > bi/qi .\nAfter each round , the clearing price p is publicly revealed .\nAgents then revise their beliefs according to any information garnered from the new price .\nThe next round proceeds as the previous .\nThe process continues until an equilibrium is reached , meaning that prices and bids do not change from one round to the next .\nIn this paper , we make a further simplifying restriction on the trading in each round : We assume that qi = 1 for each agent i .\nThis modeling assumption serves two analytical purposes .\nFirst , it ensures that there is forced trade in every round .\nClassic results in economics show that perfectly rational and risk-neutral agents will never trade with each other for purely speculative reasons ( even if they have differing information ) [ 20 ] .\nThere are many factors that can induce rational agents to trade , such as differing degrees of risk aversion , the presence of other traders who are trading for liquidity reasons rather than speculative gain , or a market maker who is pumping money into the market through a subsidy .\nWe sidestep this issue by simply assuming that the\ninformed agents will trade ( for unspecified reasons ) .\nSecond , forcing qi = 1 for all i means that the total volume of trade and the impact of any one trader on the clearing price are common knowledge ; the clearing price p is a simple function of the agents ' bids , p = Ei bi/n .\nWe will discuss the implications of alternative market models in Section 5 .\n3.3 Agent strategies\nIn order to draw formal conclusions about the price evolution process , we need to make some assumptions about how agents behave .\nEssentially we assume that agents are riskneutral , myopic ,6 and bid truthfully : Each agent in each round bids his or her current valuation of the security , which is that agent 's estimation of the expected payoff of the security .\nExpectations are computed according to each agent 's probability distribution , which is updated via Bayes ' rule when new information ( revealed via the clearing prices ) becomes available .\nWe also assume that it is common knowledge that all the agents behave in the specified manner .\nWould rational agents actually behave according to this strategy ?\nIt 's hard to say .\nCertainly , we do not claim that this is an equilibrium strategy in the game-theoretic sense .\nFurthermore , it is clear that we are ignoring some legitimate tactics , e.g. 
, bidding falsely in one round in order to effect other agents ' judgments in the following rounds ( nonmyopic reasoning ) .\nHowever , we believe that the strategy outlined is a reasonable starting point for analysis .\nSolving for a true game-theoretic equilibrium strategy in this setting seems extremely difficult .\nOur assumptions seem reasonable when there are enough agents in the system such that extremely complex meta-reasoning is not likely to improve upon simply bidding one 's true expected value .\nIn this case , according the the Shapley-Shubik mechanism , if the clearing price is below an agent 's expected value that agent will end up buying ( increasing expected profit ) ; otherwise , if the clearing price is above the agent 's expected value , the agent will end up selling ( also increasing expected profit ) .\n4 .\nCOMPUTATIONAL PROPERTIES\nIn this section , we study the computational power of information markets for a very simple class of aggregation functions : Boolean functions of n variables .\nWe characterize the set of Boolean functions that can be computed in our market model for all prior distributions and then prove upper and lower bounds on the worst-case convergence time for these markets .\nThe information structure we assume is as follows : There are n agents , and each agent i has a single bit of private information xi .\nWe use X to denote the vector ( x1 , ... , xn ) of inputs .\nAll the agents also have a common prior probability distribution P : { 0 , 11n __ + [ 0 , 1 ] over the values of X .\nWe define a Boolean aggregate function f ( X ) : { 0 , 11n __ + { 0 , 11 that we would like the market to compute .\nNote that X , and hence f ( X ) , is completely determined by the combination of all the agents ' information , but it is not known to any one agent .\nThe agents trade in a Boolean security F , which pays off $ 1 if f ( X ) = 1 and $ 0 if f ( X ) = 0 .\nSo an omniscient 6Risk neutrality implies that each agent 's utility for the security is linearly related to his or her subjective estimation of the expected payoff of the security .\nMyopic behavior means that agents treat each round as if it were the final round : They do not reason about how their bids may affect the bids of other agents in future rounds .\nf ( X ) .\nagent with access to all the agents ' bits would know the true value of security F -- either exactly $ 1 or exactly $ 0 .\nIn reality , risk-neutral agents with limited information will value F according to their expectation of its payoff , or Ei [ f ( x ) ] , where Ei is the expectation operator applied according to agent i 's probability distribution .\nFor any function f , trading in F may happen to converge to the true value of f ( x ) by coincidence if the prior probability distribution is sufficiently degenerate .\nMore interestingly , we would like to know for which functions f does the price of the security F always converge to f ( x ) for all prior probability distributions P. 7 In Section 4.2 , we prove a necessary and sufficient condition that guarantees convergence .\nIn Section 4.3 , we address the natural follow-up question , by deriving upper and lower bounds on the worst-case number of rounds of trading required for the value of f ( x ) to be revealed .\n4.1 Equilibrium price characterization\nOur analysis builds on a characterization of the equilibrium price of F that follows from a powerful result on common knowledge of aggregates due to McKelvey and Page [ 19 ] , later extended by Nielsen et al. 
[ 21 ] .\nInformation markets aim to aggregate the knowledge of all the agents .\nProcedurally , this occurs because the agents learn from the markets : The price of the security conveys information to each agent about the knowledge of other agents .\nWe can model the flow of information through prices as follows .\nLet \u2126 = { 0 , 1 } n be the set of possible values of x ; we say that \u2126 denotes the set of possible `` states of the world . ''\nThe prior P defines everyone 's initial belief about the likelihood of each state .\nAs trading proceeds , some possible states can be logically ruled out , but the relative likelihoods among the remaining states are fully determined by the prior P .\nSo the common knowledge after any stage is completely described by the set of states that an external observer -- with no information beyond the sequence of prices observed -- considers possible ( along with the prior ) .\nSimilarly , the knowledge of agent i at any point is also completely described by the set of states she considers possible .\nWe use the notation Sr to denote the common-knowledge possibility set after round r , and Sri to denote the set of states that agent i considers possible after round r. Initially , the only common knowledge is that the input vector x is in \u2126 ; in other words , the set of states considered possible by an external observer before trading has occurred is the set S0 = \u2126 .\nHowever , each agent i also knows the value of her bit xi ; thus , her knowledge set S0i is the set { y E \u2126 | yi = xi } .\nAgent i 's first-round bid is her conditional expectation of the event f ( x ) = 1 given that x E S0i .\nAll the agents ' bids are processed , and the clearing price p1 is announced .\nAn external observer could predict agent i 's bid if he knew the value of xi .\nThus , if he knew the value of x , he could predict the value of p1 .\nIn other words , the external observer knows the function price1 ( x ) that relates the first round price to the true state x. Of course , he does not know the value of x ; however , he can rule out any vector x that would have resulted in a different clearing price from the observed price p1 .\n7We assume that the common prior is consistent with x in the sense that it assigns a non-zero probability to the actual value of x. Thus , the common knowledge after round 1 is the set S1 = { y E S0 | price1 ( y ) = p1 } .\nAgent i knows the common knowledge and , in addition , knows the value of bit xi .\nHence , after every round r , the knowledge of agent i is given by Sri = { y E Sr | yi = xi } .\nNote that , because knowledge can only improve over time , we must always have Sri C _ Sr-1 i and Sr C _ Sr-1 .\nThus , only a finite number of changes in each agent 's knowledge are possible , and so eventually we must converge to an equilibrium after which no player learns any further information .\nWe use S ' to denote the common knowledge at this point , and S ' i to denote agent i 's knowledge at this point .\nLet p ' denote the clearing price at equilibrium .\nInformally , McKelvey and Page [ 19 ] show that , if n people with common priors but different information about the likelihood of some event A agree about a `` suitable '' aggregate of their individual conditional probabilities , then their individual conditional probabilities of event A 's occurring must be identical .\n( The precise definition of `` suitable '' is described below . 
)\nThere is a strong connection to rational expectation equilibria in markets , which was noted in the original McKelvey-Page paper : The market price of a security is common knowledge at the point of equilibrium .\nThus , if the price is a `` suitable '' aggregate of the conditional expectations of all the agents , then in equilibrium they must have identical conditional expectations of the event that the security will pay off .\n( Note that their information may still be different . )\nBergin and Brandenburger [ 2 ] proved that this simple definition of stochastically monotone functions is equivalent to the original definition in McKelvey-Page [ 19 ] .\nDEFINITION 2 .\nA function g : Rn -- + R is called stochastically regular if it can be written in the form g = h o g ' , where g ' is stochastically monotone and h is invertible on the range of g ' .\nWe can now state the McKelvey-Page result , as generalized by Nielsen et al. [ 21 ] .\nIn our context , the following simple theorem statement suffices ; more general versions of this theorem can be found in [ 19 , 21 ] .\nTHEOREM 1 .\n( Nielsen et al. [ 21 ] ) Suppose that , at equilibrium , the n agents have a common prior , but possibly different information , about the value of a random variable F , as described above .\nFor all i , let p ' i = E ( F | x E S ' i ) .\nIf g is a stochastically regular function and g ( p ' 1 , p ' 2 , ... , p 'n ) is common knowledge , then it must be the case that\nIn one round of our simplified Shapley-Shubik trading model , the announced price is the mean of the conditional expectations of the n agents .\nThe mean is a stochastically regular function ; hence , Theorem 1 shows that , at equilibrium , all agents have identical conditional expectations of the payoff of the security .\nIt follows that the equilibrium\nprice p ' must be exactly the conditional expectations of all agents at equilibrium .\nTheorem 1 does not in itself say how the equilibrium is reached .\nMcKelvey and Page , extending an argument due to Geanakoplos and Polemarchakis [ 10 ] , show that repeated announcement of the aggregate will eventually result in common knowledge of the aggregate .\nIn our context , this is achieved by announcing the current price at the end of each round ; this will ultimately converge to a state in which all agents bid the same price p ' .\nHowever , reaching an equilibrium price is not sufficient for the purposes of information aggregation .\nWe also want the price to reveal the actual value of f ( x ) .\nIt is possible that the equilibrium price p ' of the security F will not be either 0 or 1 , and so we can not infer the value of f ( x ) from it .\nExample 1 : Consider two agents 1 and 2 with private input bits x1 and x2 respectively .\nSuppose the prior probability distribution is uniform , i.e. 
, x = ( x1 , x2 ) takes the values ( 0 , 0 ) , ( 0 , 1 ) , ( 1 , 0 ) , and ( 1 , 1 ) each with probability 14 .\nNow , suppose the aggregate function we want to compute is the XOR function , f ( x ) = x1 \u2295 x2 .\nTo this end , we design a market to trade in a Boolean security F , which will eventually payoff $ 1 iff x1 \u2295 x2 = 1 .\nIf agent 1 observes x1 = 1 , she estimates the expected value of F to be the probability that x2 = 0 ( given x1 = 1 ) , which is 21 .\nIf she observes x1 = 0 , her expectation of the value of F is the conditional probability that x2 = 1 , which is also 21 .\nThus , in either case , agent 1 will bid 0.5 for F in the first round .\nSimilarly , agent 2 will also always bid 0.5 in the first round .\nHence , the first round of trading ends with a clearing price of 0.5 .\nFrom this , agent 2 can infer that agent 1 bid 0.5 , but this gives her no information about the value of x1 -- it is still equally likely to be 0 or 1 .\nAgent 1 also gains no information from the first round of trading , and hence neither agent changes her bid in the following rounds .\nThus , the market reaches equilibrium at this point .\nAs predicted by Theorem 1 , both agents have the same conditional expectation ( 0.5 ) at equilibrium .\nHowever , the equilibrium price of the security F does not reveal the value of f ( x1 , x2 ) , even though the combination of agents ' information is enough to determine it precisely .\n4.2 Characterizing computable aggregates\nWe now give a necessary and sufficient characterization of the class of functions f such that , for any prior distribution on x , the equilibrium price of F will reveal the true value of f .\nWe show that this is exactly the class of weighted threshold functions :\nTHEOREM 2 .\nIf f is a weighted threshold function , then , for any prior probability distribution P , the equilibrium price of F is equal to f ( x ) .\nProof :\nLet S ' i denote the possibility set of agent i at equilibrium .\nAs before , we use p ' to denote the final trading price at this point .\nNote that , by Theorem 1 , p ' is exactly agent i 's conditional expectation of the value of f ( x ) , given her final possibility set S ' i .\nFirst , observe that if p ' is 0 or 1 , then we must have f ( x ) = p ' , regardless of the form of f. For instance , if p ' = 1 , this means that E ( f ( y ) | y \u2208 S ' ) = 1 .\nAs f ( \u00b7 ) can only take the values 0 or 1 , it follows that P ( f ( y ) = 1 | y \u2208 S ' ) = 1 .\nThe actual value x is always in the final possibility set S ' , and , furthermore , it must have non-zero prior probability , because it actually occurred .\nHence , it follows that f ( x ) = 1 in this case .\nAn identical argument shows that if p ' = 0 , f ( x ) = 0 .\nHence , it is enough to show that , if f is a weighted threshold function , then p ' is either 0 or 1 .\nWe prove this by contradiction .\nLet f ( \u00b7 ) be a weighted threshold function corresponding to weights { wi } , and assume that 0 < p ' < 1 .\nBy Theorem 1 , we must have :\nBecause by assumption p ' = 0 , 1 , both Ji + and Ji are well-defined ( for all i ) : Neither is conditioned on a zeroprobability event .\nClaim : Eqs .\n1 and 3 imply that Ji + = Ji - , for all i. 
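As a concrete illustration of the market model of Section 3, and of the contrast between Example 1 and the convergence that Theorem 2 asserts for threshold securities, the following Python sketch simulates the trading process: in each round every agent bids her conditional expectation of the payoff given the common-knowledge set and her own bit, the clearing price is the mean of the bids (q_i = 1), and all agents rule out any state that would have produced a different clearing price. This is only a minimal sketch under the paper's assumptions, not the authors' code; the function name simulate_market, the use of exact rational arithmetic, and the majority-of-three example are our own illustrative choices.

import itertools
from fractions import Fraction

def simulate_market(n, f, prior, x_true, max_rounds=20):
    """One run of the simplified market: n agents, one private bit each."""
    states = [s for s in itertools.product((0, 1), repeat=n) if prior.get(s, 0) > 0]

    def cond_exp(possible):
        # E[f(y) | y in `possible`] under the common prior
        mass = sum(prior[s] for s in possible)
        return sum(prior[s] * f(s) for s in possible) / mass

    common = set(states)              # S^r: states an outside observer considers possible
    prices = []
    for _ in range(max_rounds):
        def clearing_price(y):
            # If y were the true state, agent i would bid E[f | S^{r-1}, y_i]; the price is the mean bid.
            bids = [cond_exp({s for s in common if s[i] == y[i]}) for i in range(n)]
            return sum(bids) / n

        p = clearing_price(x_true)
        prices.append(p)
        # The price rule is common knowledge, so states inconsistent with p are ruled out.
        new_common = {s for s in common if clearing_price(s) == p}
        if new_common == common:      # equilibrium: no further information is revealed
            break
        common = new_common
    return prices

# Example 1: XOR with a uniform prior -- the price stalls at 1/2 and never reveals f(x).
uniform2 = {s: Fraction(1, 4) for s in itertools.product((0, 1), repeat=2)}
print(simulate_market(2, lambda s: s[0] ^ s[1], uniform2, (1, 1)))          # [Fraction(1, 2)]

# A weighted threshold security (majority of three bits) converges to f(x) = 1.
uniform3 = {s: Fraction(1, 8) for s in itertools.product((0, 1), repeat=3)}
print(simulate_market(3, lambda s: int(sum(s) >= 2), uniform3, (1, 0, 1)))  # [..., Fraction(1, 1)]

Run as written, the first call reproduces the stalled price of 1/2 from Example 1, while the second reaches a price of 1 within two rounds, from which the value of f(x) can be read off.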
Proof of claim : We consider the two cases xi = 1 and xi = 0 separately .\nCase ( i ) : xi = 1 .\nWe can assume that Ji and Ji + are not both 0 ( or else , the claim is trivially true ) .\nIn this case , we have\nCase ( ii ) : xi = 0 .\nWhen xi = 0 , observe that the argument of Case ( i ) can be used to prove that ( 1 \u2212 Ji + ) = ( 1 \u2212 Ji ) .\nIt immediately follows that Ji + = Ji as well .\n\u2737 Hence , we must also have J + = J - .\nBut using linearity of expectation , we can also write J + as\nand thus J \u2212 < 1 .\nThis implies J \u2212 = J + , which leads to a contradiction .\n\u2751 Perhaps surprisingly , the converse of Theorem 2 also holds : THEOREM 3 .\nSuppose f : { 0 , 1 } n \u2192 { 0 , 1 } can not be expressed as a weighted threshold function .\nThen there exists a prior distribution P for which the price of the security F does not converge to the value of f ( x ) .\nProof : We start from a geometric characterization of weighted threshold functions .\nConsider the Boolean hypercube { 0 , 1 } n as a set of points in ~ n .\nIt is well known that f is expressible as a weighted threshold function iff there is a hyperplane in ~ n that separates all the points at which f has value 0 from all the points at which f has value 1 .\nNow , consider the setsH + = Conv ( f \u2212 1 ( 1 ) ) and\nwhere Conv ( S ) denotes the convex hull of S in ~ n. H + and H \u2212 are convex sets in ~ n , and so , if they do not intersect , we can find a separating hyperlane between them .\nThis means that , if f is not expressible as a weighted threshold function , H + and H \u2212 must intersect .\nIn this case , we show how to construct a prior P for which f ( x ) is not computed by the market .\nLet x \u2217 \u2208 ~ n be a point in H + \u2229 H \u2212 .\nBecause x \u2217 is in H + , there exists some points z1 , z2 , ... , zm and constants \u03bb1 , \u03bb2 , ... , \u03bbm , such that the following constraints are satisfied :\nSimilarly , because x \u2217 \u2208 H \u2212 , there are points y1 , y2 , ... , yl and constants \u00b51 , \u00b52 , ... 
, \u00b5l , such that\nand all other points are assigned probability 0 .\nIt is easy to see that this is a valid probability distribution .\nUnder this distribution P , first observe that P ( f ( x ) = 1 ) = 21 .\nFurther , for any i such that 0 < x \u2217 i < 1 , we have\nFor indices i such that x \u2217 i is 0 or 1 exactly , i 's private information reveals no additional information under prior P , and so here too we have P ( f ( x ) = 1 | xi = 0 ) = P ( f ( x ) = 1 | xi = 1 ) = 21 .\nHence , regardless of her private bit xi , each agent i will bid 0.5 for security F in the first round .\nThe clearing price of 0.5 also reveals no additional information , and so this is an equilibrium with price p \u221e = 0.5 that does not reveal the value of f ( x ) .\n\u2751 The XOR function is one example of a function that can not be expressed as weighted threshold function ; Example 1 illustrates Theorem 3 for this function .\n4.3 Convergence time bounds\nWe have shown that the class of Boolean functions computable in our model is the class of weighted threshold functions .\nThe next natural question to ask is : How many rounds of trading are necessary before the equilibrium is reached ?\nWe analyze this problem using the same simplified Shapley-Shubik model of market clearing in each round .\nWe first prove that , in the worst case , at most n rounds are required .\nThe idea of the proof is to consider the sequence of common knowledge sets \u2126 = S0 , S1 , ... , and show that , until the market reaches equilibrium , each set has a strictly lower dimension than the previous set .\nSr \u2212 1 , then dim ( Sr ) < dim ( Sr \u2212 1 ) .\nProof : Let k = dim ( Sr \u2212 1 ) .\nConsider the bids in round r .\nIn our model , agent i will bid her current expectation for the value of F ,\non the set Sr \u2212 1 , which is common knowledge before round\nfollows that the clearing price in round r is given by\nAll the agents already know all the h ( 0 ) i and di values , and they observe the price pr at the end of the rth round .\nThus , they effectively have a linear equation in x1 , x2 , ... , xn that they use to improve their knowledge by ruling out any possibility that would not have resulted in price pr .\nIn other words , after r rounds , the common knowledge set Sr is the intersection of Sr-1 with the hyperplane defined by Equation ( 4 ) .\nIt follows that Sr is contained in the intersection of this hyperplane with the k-dimension linear space containing Sr-1 .\nIf Sr is not equal to Sr-1 , this intersection defines a linear subspace of dimension ( k \u2212 1 ) that contains Sr , and\nProof : Consider the sequence of common knowledge sets S0 , S1 , ... , and let r be the minimum index such that Sr = Sr-1 .\nThen , the rth round of trading does not improve any agent 's knowledge , and thus we must have S ' = Sr-1 and p ' = pr-1 .\nObserving that dim ( S0 ) = n , and applying Lemma 1 to the first r \u2212 1 rounds , we must have ( r \u2212 1 ) \u2264 n. Thus , the price reaches its equilibrium value within n rounds .\n\u2751 Theorem 4 provides an upper bound of O ( n ) on the number of rounds required for convergence .\nWe now show that this bound is tight to within a factor of 2 by constructing a threshold function with 2n inputs and a prior distribution for which it takes n rounds to determine the value of f ( x ) in the worst case .\nThe functions we use are the carry-bit functions .\nThe function Cn takes 2n inputs ; for convenience , we write the inputs as x1 , x2 ... , xn , y1 , y2 , ... 
, yn or as a pair ( x , y ) .\nThe function value is the value of the high-order carry bit when the binary numbers xnxn-1 \u00b7 \u00b7 \u00b7 x1 and ynyn-1 \u00b7 \u00b7 \u00b7 y1 are added together .\nIn weighted threshold form , this can be written as\nFor this proof , let us call the agents A1 , A2 , ... , An , B1 , B2 , ... , Bn , where Ai holds input bit xi , and Bi holds input bit yi .\nWe first illustrate our technique by proving that computing C2 requires 2 rounds in the worst case .\nTo do this , we construct a common prior P2 as follows :\n\u2022 The pair ( x1 , y1 ) takes on the values ( 0 , 0 ) , ( 0 , 1 ) , ( 1 , 0 ) , ( 1 , 1 ) uniformly ( i.e. , with probability 14 each ) .\n\u2022 We extend this to a distribution on ( x1 , x2 , y1 , y2 ) by specifying the conditional distribution of ( x2 , y2 ) given ( x1 , y1 ) : If ( x1 , y1 ) = ( 1 , 1 ) , then ( x2 , y2 ) takes the values ( 0 , 0 ) , ( 0 , 1 ) , ( 1 , 0 ) , ( 1 , 1 ) with probabilities 21 , 61 ,\n( 0 , 0 ) , ( 0 , 1 ) , ( 1 , 0 ) , ( 1 , 1 ) with probabilities 61 , 61,6 1 , 21 respectively .\nNow , suppose x1 turns out to be 1 , and consider agent A1 's bid in the first round .\nIt is given by\nOn the other hand , if x1 turns out to be 0 , agent A1 's bid would be given by\nThus , irrespective of her bit , A1 will bid 0.5 in the first round .\nNote that the function and distribution are symmetric between x and y , and so the same argument shows that B1 will also bid 0.5 in the first round .\nThus , the price p1 announced at the end of the first round reveals no information about x1 or y1 .\nThe reason this occurs is that , under this distribution , the second carry bit C2 is statistically independent of the first carry bit ( x1 \u2227 y1 ) ; we will use this trick again in the general construction .\nNow , suppose that ( x2 , y2 ) is either ( 0 , 1 ) or ( 1 , 0 ) .\nThen , even if x2 and y2 are completely revealed by the first-round price , the value of C2 ( x1 , x2 , y1 , y2 ) is not revealed : It will be 1 if x1 = y1 = 1 and 0 otherwise .\nThus , we have shown that at least 2 rounds of trading will be required to reveal the function value in this case .\nWe now extend this construction to show by induction that the function Cn takes n rounds to reach an equilibrium in the worst case .\nTHEOREM 5 .\nThere is a function Cn with 2n inputs and a prior distribution Pn such that , in the worst case , the market takes n rounds to reveal the value of Cn ( \u00b7 ) .\nProof : We prove the theorem by induction on n .\nThe base case for n = 2 has already been shown to be true .\nStarting from the distribution P2 described above , we construct the distributions P3 , P4 , ... , Pn by inductively applying the following rule :\n\u2022 Let x-n denote the vector ( x1 , x2 , ... 
, xn-1 ) , and define y-n similarly .\nWe extend the distribution Pn-1 on ( x-n , y-n ) to a distribution Pn on ( x , y ) by specifying the conditional distribution of ( xn , yn ) given ( x-n , y-n ) : If Cn-1 ( x-n , y-n ) = 1 , then ( xn , yn ) takes the values ( 0 , 0 ) , ( 0 , 1 ) , ( 1 , 0 ) , ( 1 , 1 ) with probabili\nties 21 , 61 , 61 , 61 respectively .\nOtherwise , ( xn , yn ) takes\nProof of claim : A similar calculation to that used for C2 above shows that the value of Cn ( x , y ) under this distribution is statistically independent of Cn-1 ( x-n , y-n ) .\nFor i < n , xi can affect the value of Cn only through Cn-1 .\nAlso , by contruction of Pn , given the value of Cn-1 , the distribution of Cn is independent of xi .\nIt follows that Cn ( x , y ) is statistically independent of xi as well .\nOf course , a similar result holds for yi by symmetry .\nThus , in the first round , for all i = 1 , 2 , ... , n \u2212 1 , the bids of agents Ai and Bi do not reveal anything about their private information .\nThus , the first-round price does not reveal any information about the value of ( x-n , y-n ) .\nOn the other hand , agents An and Bn do have different expectations of Cn ( x ) depending on whether their input bit is a 0 or a 1 ; thus , the first-round price does reveal whether neither , one , or both of xn and yn are 1 .\nNow , consider a situation in which ( xn , yn ) takes on the value ( 1 , 0 ) or ( 0 , 1 ) .\nWe show that , in this case , after one round we are left with the residual problem of computing the value of Cn-1 ( x-n , y-n ) under the prior Pn-1 .\nClearly , when xn + yn = 1 , Cn ( x , y ) = Cn-1 ( x-n , y-n ) .\nFurther , according to the construction of Pn , the event ( xn + yn = 1 ) has the same probability ( 1/3 ) for all values of ( x-n , y-n ) .\nThus , conditioning on this fact does not alter the probability distribution over ( x-n , y-n ) ; it must still be Pn-1 .\nFinally , the inductive assumption tells us that solving this residual problem will take at least n \u2212 1 more rounds in the worst case and hence that finding the value of Cn ( x , y ) takes at least n rounds in the worst case .\n\u2737\n5 .\nDISCUSSION\nOur results have been derived in a simplified model of an information market .\nIn this section , we discuss the applicability of these results to more general trading models .\nAssuming that agents bid truthfully , Theorem 2 holds in any model in which the price is a known stochastically monotone aggregate of agents ' bids .\nWhile it seems reasonable that the market price satisfies monotonicity properties , the exact form of the aggregate function may not be known if the volume of each user 's trades is not observable ; this depends on the details of the market process .\nTheorem 3 and Theorem 5 hold more generally ; they only require that an agent 's strategy depends only on her conditional expectation of the security 's value .\nPerhaps the most fragile result is Theorem 4 , which relies on the linear form of the Shapley-Shubik clearing price ( in addition to the conditions for Theorem 2 ) ; however , it seems plausible that a similar dimension-based bound will hold for other families of nonlinear clearing prices .\nUp to this point , we have described the model with the same number of agents as bits of information .\nHowever , all the results hold even if there is competition in the form of a known number of agents who know each bit of information .\nIndeed , modeling such competition may help alleviate the strategic problems in our 
current model .\nAnother interesting approach to addressing the strategic issue is to consider alternative markets that are at least myopically incentive compatible .\nOne example is a market mechanism called a market scoring rule , suggested by Hanson [ 12 ] .\nThese markets have the property that a riskneutral agent 's best myopic strategy is to truthfully bid her current expected value of the security .\nAdditionally , the number of securities involved in each trade is fixed and publicly known .\nIf the market structure is such that , for example , the current scoring rule is posted publicly after each agent 's trade , then in equilibrium there is common knowledge of all agents ' expectation , and hence Theorem 2 holds .\nTheorem 3 also applies in this case , and hence we have the same characterization for the set of computable Boolean functions .\nThis suggests that the problem of eliciting truthful responses may be orthogonal to the problem of computing the desired aggregate , reminiscent of the revelation principle [ 18 ] .\nIn this paper , we have restricted our attention to the simplest possible aggregation problem : computing Boolean functions of Boolean inputs .\nThe proofs of Theorems 3 and 5 also hold if we consider Boolean functions of real inputs , where each agent 's private information is a real number .\nFurther , Theorem 2 also holds provided the market reaches equilibrium .\nWith real inputs and arbitrary prior distributions , however , it is not clear that the market will reach an equilibrium in a finite number of steps .\n6 .\nCONCLUSION 6.1 Summary\nWe have framed the process of information aggregation in markets as a computation on distributed information .\nWe have developed a simplified model of an information market that we believe captures many of the important aspects of real agent interaction in an information market .\nWithin this model , we prove several results characterizing precisely what the market can compute and how quickly .\nSpecifically , we show that the market is guaranteed to converge to the true rational expectations equilibrium if and only if the security payoff function is a weighted threshold function .\nWe prove that the process whereby agents reveal their information over time and learn from the resulting announced prices takes at most n rounds to converge to the correct full-information price in the worst case .\nWe show that this bound is tight within a factor of two .\n6.2 Future work\nWe view this paper as a first step towards understanding the computational power of information markets .\nSome interesting and important next steps include gaining a better understanding of the following : 9 The effect of price accuracy and precision : We have assumed that the clearing price is known with unlimited precision ; in practice , this will not be true .\nFurther , we have neglected influences on the market price other than from rational traders ; the market price may also be influenced by other factors such as misinformed or irrational traders .\nIt is interesting to ask what aggregates can be computed even in the presence of noisy prices .\n9 Incremental updates : If the agents have computed the value of the function and a small number of input bits are switched , can the new value of the function be computed incrementally and quickly ?\n9 Distributed computation : In our model , distributed information is aggregated through a centralized market\ncomputation .\nIn a sense , some of the computation itself is distributed among the participating 
agents , but can the market computation also be distributed ?\nFor example , can we find a good distributed-computational model of a decentralized market ?\n9 Agents ' computation : We have not accounted for the complexity of the computations that agents must do to accurately update their beliefs after each round .\n9 Strategic market models : For reasons of simplicity and tractability , we have directly assumed that agents bid truthfully .\nA more satisfying approach would be to assume only rationality and solve for the resulting gametheoretic solution strategy , either in our current computational model or another model of an information market .\n9 The common-prior assumption : Can we say anything about the market behavior when agents ' priors are only approximately the same or when they differ greatly ?\n9 Average-case analysis : Our negative results ( Theorems 3 and 5 ) examine worst-case scenarios , and thus involve very specific prior probability distributions .\nIt is interesting to ask whether we would get very different results for generic prior distributions .\n9 Information market design : Non-threshold functions can be implemented by layering two or more threshold functions together .\nWhat is the minimum number of threshold securities required to implement a given function ?\nThis is exactly the problem of minimizing the size of a neural network , a well-studied problem known to be NP-hard [ 15 ] .\nWhat configuration of securities can best approximate a given function ?\nAre there ways to define and configure securities to speed up convergence to equilibrium ?\nWhat is the relationship between machine learning ( e.g. , neural-network learning ) and information-market design ?"} {"id": "H-11", "title": "", "abstract": "", "keyphrases": ["relev feedback", "imag represent", "contentbas imag retriev", "activ learn", "least squar regress model", "optim experiment design", "top return imag", "precis rate", "intrins geometr structur", "patten recognit", "label", "imag retriev", "activ learn"], "prmu": [], "lvl-1": "Laplacian Optimal Design for Image Retrieval Xiaofei He Yahoo! 
Burbank, CA 91504 hex@yahoo-inc.com Wanli Min IBM Yorktown Heights, NY 10598 wanlimin@us.ibm.com Deng Cai CS Dept., UIUC Urbana, IL 61801 dengcai2@cs.uiuc.edu Kun Zhou Microsoft Research Asia Beijing, China kunzhou@microsoft.com ABSTRACT Relevance feedback is a powerful technique to enhance ContentBased Image Retrieval (CBIR) performance.\nIt solicits the user``s relevance judgments on the retrieved images returned by the CBIR systems.\nThe user``s labeling is then used to learn a classifier to distinguish between relevant and irrelevant images.\nHowever, the top returned images may not be the most informative ones.\nThe challenge is thus to determine which unlabeled images would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nIn this paper, we propose a novel active learning algorithm, called Laplacian Optimal Design (LOD), for relevance feedback image retrieval.\nOur algorithm is based on a regression model which minimizes the least square error on the measured (or, labeled) images and simultaneously preserves the local geometrical structure of the image space.\nSpecifically, we assume that if two images are sufficiently close to each other, then their measurements (or, labels) are close as well.\nBy constructing a nearest neighbor graph, the geometrical structure of the image space can be described by the graph Laplacian.\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images, which gives us the most amount of information.\nExperimental results on Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval.\nCategories and Subject Descriptors H.3.3 [Information storage and retrieval]: Information search and retrieval-Relevance feedback; G.3 [Mathematics of Computing]: Probability and Statistics-Experimental design General Terms Algorithms, Performance, Theory 1.\nINTRODUCTION In many machine learning and information retrieval tasks, there is no shortage of unlabeled data but labels are expensive.\nThe challenge is thus to determine which unlabeled samples would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nThis problem is typically called active learning [4].\nHere the task is to minimize an overall cost, which depends both on the classifier accuracy and the cost of data collection.\nMany real world applications can be casted into active learning framework.\nParticularly, we consider the problem of relevance feedback driven Content-Based Image Retrieval (CBIR) [13].\nContent-Based Image Retrieval has attracted substantial interests in the last decade [13].\nIt is motivated by the fast growth of digital image databases which, in turn, require efficient search schemes.\nRather than describe an image using text, in these systems an image query is described using one or more example images.\nThe low level visual features (color, texture, shape, etc.) 
are automatically extracted to represent the images.\nHowever, the low level features may not accurately characterize the high level semantic concepts.\nTo narrow down the semantic gap, relevance feedback is introduced into CBIR [12].\nIn many of the current relevance feedback driven CBIR systems, the user is required to provide his/her relevance judgments on the top images returned by the system.\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not.\nHowever, in general the top returned images may not be the most informative ones.\nIn the worst case, all the top images labeled by the user may be positive and thus the standard classification techniques can not be applied due to the lack of negative examples.\nUnlike the standard classification problems where the labeled samples are pregiven, in relevance feedback image retrieval the system can actively select the images to label.\nThus active learning can be naturally introduced into image retrieval.\nDespite many existing active learning techniques, Support Vector Machine (SVM) active learning [14] and regression based active learning [1] have received the most interests.\nBased on the observation that the closer to the SVM boundary an image is, the less reliable its classification is, SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback so as to achieve maximal refinement on the hyperplane between the two classes.\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough.\nMoreover, it may not be applied at the beginning of the retrieval when there is no labeled images.\nSome other SVM based active learning algorithms can be found in [7], [9].\nIn statistics, the problem of selecting samples to label is typically referred to as experimental design.\nThe sample x is referred to as experiment, and its label y is referred to as measurement.\nThe study of optimal experimental design (OED) [1] is concerned with the design of experiments that are expected to minimize variances of a parameterized model.\nThe intent of optimal experimental design is usually to maximize confidence in a given model, minimize parameter variances for system identification, or minimize the model``s output variance.\nClassical experimental design approaches include A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nAll of these approaches are based on a least squares regression model.\nComparing to SVM based active learning algorithms, experimental design approaches are much more efficient in computation.\nHowever, this kind of approaches takes only measured (or, labeled) data into account in their objective function, while the unmeasured (or, unlabeled) data is ignored.\nBenefit from recent progresses on optimal experimental design and semi-supervised learning, in this paper we propose a novel active learning algorithm for image retrieval, called Laplacian Optimal Design (LOD).\nUnlike traditional experimental design methods whose loss functions are only defined on the measured points, the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points.\nSpecifically, we introduce a locality preserving regularizer into the standard least-square-error based loss function.\nThe new loss function aims to find a classifier which is locally as smooth as possible.\nIn other words, if two points are sufficiently close to each other in the input space, then they are 
expected to share the same label.\nOnce the loss function is defined, we can select the most informative data points which are presented to the user for labeling.\nIt would be important to note that the most informative images may not be the top returned images.\nThe rest of the paper is organized as follows.\nIn Section 2, we provide a brief description of the related work.\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3.\nIn Section 4, we compare our algorithm with the state-or-the-art algorithms and present the experimental results on image retrieval.\nFinally, we provide some concluding remarks and suggestions for future work in Section 5.\n2.\nRELATED WORK Since our proposed algorithm is based on regression framework.\nThe most related work is optimal experimental design [1], including A-Optimal Design, D-Optimal Design, and EOptimal Design.\nIn this Section, we give a brief description of these approaches.\n2.1 The Active Learning Problem The generic problem of active learning is the following.\nGiven a set of points A = {x1, x2, \u00b7 \u00b7 \u00b7 , xm} in Rd , find a subset B = {z1, z2, \u00b7 \u00b7 \u00b7 , zk} \u2282 A which contains the most informative points.\nIn other words, the points zi(i = 1, \u00b7 \u00b7 \u00b7 , k) can improve the classifier the most if they are labeled and used as training points.\n2.2 Optimal Experimental Design We consider a linear regression model y = wT x + (1) where y is the observation, x is the independent variable, w is the weight vector and is an unknown error with zero mean.\nDifferent observations have errors that are independent, but with equal variances \u03c32 .\nWe define f(x) = wT x to be the learner``s output given input x and the weight vector w. Suppose we have a set of labeled sample points (z1, y1), \u00b7 \u00b7 \u00b7 , (zk, yk), where yi is the label of zi.\nThus, the maximum likelihood estimate for the weight vector, \u02c6w, is that which minimizes the sum squared error Jsse(w) = k i=1 wT zi \u2212 yi 2 (2) The estimate \u02c6w gives us an estimate of the output at a novel input: \u02c6y = \u02c6wT x. By Gauss-Markov theorem, we know that \u02c6w \u2212 w has a zero mean and a covariance matrix given by \u03c32 H\u22121 sse, where Hsse is the Hessian of Jsse(w) Hsse = \u22022 Jsse \u2202w2 = k i=1 zizT i = ZZT where Z = (z1, z2, \u00b7 \u00b7 \u00b7 , zk).\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design are: \u2022 D-optimal design: determinant of Hsse.\n\u2022 A-optimal design: trace of Hsse.\n\u2022 E-optimal design: maximum eigenvalue of Hsse.\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of matrix trace, A-optimal design is more efficient than the other two.\nSome recent work on experimental design can be found in [6], [16].\n3.\nLAPLACIAN OPTIMAL DESIGN Since the covariance matrix Hsse used in traditional approaches is only dependent on the measured samples, i.e. 
3. LAPLACIAN OPTIMAL DESIGN
Since the covariance matrix $H_{sse}$ used in traditional approaches depends only on the measured samples (i.e., the $z_i$'s), these approaches fail to evaluate the expected errors on the unmeasured samples. In this section, we introduce a novel active learning algorithm called Laplacian Optimal Design (LOD), which makes efficient use of both measured (labeled) and unmeasured (unlabeled) samples.

3.1 The Objective Function
In many machine learning problems, it is natural to assume that if two points $x_i, x_j$ are sufficiently close to each other, then their measurements $f(x_i), f(x_j)$ are close as well. Let $S$ be a similarity matrix. A new loss function that respects the geometrical structure of the data space can then be defined as follows:
$$J_0(w) = \sum_{i=1}^{k} \left(f(z_i) - y_i\right)^2 + \frac{\lambda}{2} \sum_{i,j=1}^{m} \left(f(x_i) - f(x_j)\right)^2 S_{ij} \qquad (3)$$
where $y_i$ is the measurement (or label) of $z_i$. Note that the loss function (3) is essentially the same as the one used in Laplacian Regularized Regression (LRR, [2]). However, LRR is a passive learning algorithm in which the training data is given; in this paper, we focus on how to select the most informative data for training. With our choice of symmetric weights $S_{ij}$ ($S_{ij} = S_{ji}$), the loss function incurs a heavy penalty if neighboring points $x_i$ and $x_j$ are mapped far apart. Minimizing $J_0(w)$ is therefore an attempt to ensure that if $x_i$ and $x_j$ are close, then $f(x_i)$ and $f(x_j)$ are close as well. There are many choices for the similarity matrix $S$. A simple definition is:
$$S_{ij} = \begin{cases} 1, & \text{if } x_i \text{ is among the } p \text{ nearest neighbors of } x_j \text{, or } x_j \text{ is among the } p \text{ nearest neighbors of } x_i; \\ 0, & \text{otherwise.} \end{cases} \qquad (4)$$
Let $D$ be a diagonal matrix with $D_{ii} = \sum_j S_{ij}$, and let $L = D - S$. The matrix $L$ is called the graph Laplacian in spectral graph theory [3]. Let $y = (y_1, \cdots, y_k)^T$ and $X = (x_1, \cdots, x_m)$. Following some simple algebraic steps, we see that:
$$\begin{aligned}
J_0(w) &= \sum_{i=1}^{k} \left(w^T z_i - y_i\right)^2 + \frac{\lambda}{2} \sum_{i,j=1}^{m} \left(w^T x_i - w^T x_j\right)^2 S_{ij} \\
&= \left(y - Z^T w\right)^T \left(y - Z^T w\right) + \lambda w^T \left(\sum_{i=1}^{m} D_{ii} x_i x_i^T - \sum_{i,j=1}^{m} S_{ij} x_i x_j^T\right) w \\
&= y^T y - 2 w^T Z y + w^T Z Z^T w + \lambda w^T \left(X D X^T - X S X^T\right) w \\
&= y^T y - 2 w^T Z y + w^T \left(Z Z^T + \lambda X L X^T\right) w
\end{aligned}$$
The Hessian of $J_0(w)$ is
$$H_0 = \frac{\partial^2 J_0}{\partial w^2} = Z Z^T + \lambda X L X^T$$
In some cases the matrix $Z Z^T + \lambda X L X^T$ is singular (e.g., if $m < d$), so there is no stable solution to the optimization problem (3). A common way to deal with this ill-posed problem is to introduce a Tikhonov regularizer into the loss function:
$$J(w) = \sum_{i=1}^{k} \left(w^T z_i - y_i\right)^2 + \frac{\lambda_1}{2} \sum_{i,j=1}^{m} \left(w^T x_i - w^T x_j\right)^2 S_{ij} + \lambda_2 \|w\|^2 \qquad (5)$$
The Hessian of the new loss function is
$$H = \frac{\partial^2 J}{\partial w^2} = Z Z^T + \lambda_1 X L X^T + \lambda_2 I := Z Z^T + \Lambda$$
where $I$ is the identity matrix and $\Lambda = \lambda_1 X L X^T + \lambda_2 I$. Clearly, $H$ has full rank. Requiring that the gradient of $J(w)$ with respect to $w$ vanish gives the optimal estimate $\hat{w}$:
$$\hat{w} = H^{-1} Z y$$
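As a concrete illustration of the construction above, here is a minimal NumPy sketch (ours, not from the paper) that builds the p-nearest-neighbor similarity matrix of Eq. (4), forms the graph Laplacian $L = D - S$, and computes the regularized estimate $\hat{w} = H^{-1} Z y$ of Eq. (5). Data points are assumed to be stored column-wise, and the default regularization values follow the settings reported later in the experiments.

```python
import numpy as np

def knn_similarity(X, p=5):
    """Symmetric 0/1 similarity matrix of Eq. (4): S_ij = 1 if x_i is among the
    p nearest neighbors of x_j, or vice versa. X is (d, m) with points as columns."""
    m = X.shape[1]
    dist = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # (m, m) pairwise distances
    S = np.zeros((m, m))
    for j in range(m):
        nn = np.argsort(dist[:, j])[1:p + 1]   # skip the point itself
        S[nn, j] = 1.0
    return np.maximum(S, S.T)                  # symmetrize

def regularized_estimate(X, Z, y, lam1=0.001, lam2=1e-5, p=5):
    """Optimal estimate w_hat = H^{-1} Z y with H = Z Z^T + lam1 X L X^T + lam2 I."""
    d = X.shape[0]
    S = knn_similarity(X, p)
    L = np.diag(S.sum(axis=1)) - S             # graph Laplacian L = D - S
    H = Z @ Z.T + lam1 * (X @ L @ X.T) + lam2 * np.eye(d)
    return np.linalg.solve(H, Z @ y)
```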
The following proposition states the bias and variance properties of the estimator of the coefficient vector $w$.

Proposition 3.1.
$$E(\hat{w} - w) = -H^{-1} \Lambda w, \qquad Cov(\hat{w}) = \sigma^2 \left(H^{-1} - H^{-1} \Lambda H^{-1}\right)$$
Proof. Since $y = Z^T w + \epsilon$ and $E(\epsilon) = 0$, it follows that
$$E(\hat{w} - w) = H^{-1} Z Z^T w - w = H^{-1} \left(Z Z^T + \Lambda - \Lambda\right) w - w = \left(I - H^{-1} \Lambda\right) w - w = -H^{-1} \Lambda w \qquad (7)$$
Noticing that $Cov(y) = \sigma^2 I$, the covariance matrix of $\hat{w}$ is
$$Cov(\hat{w}) = H^{-1} Z \, Cov(y) \, Z^T H^{-1} = \sigma^2 H^{-1} Z Z^T H^{-1} = \sigma^2 H^{-1} (H - \Lambda) H^{-1} = \sigma^2 \left(H^{-1} - H^{-1} \Lambda H^{-1}\right) \qquad (8)$$
Therefore, the mean squared error matrix of the coefficients $w$ is
$$E\left[(w - \hat{w})(w - \hat{w})^T\right] = H^{-1} \Lambda w w^T \Lambda H^{-1} + \sigma^2 \left(H^{-1} - H^{-1} \Lambda H^{-1}\right) \qquad (10)$$
For any $x$, let $\hat{y} = \hat{w}^T x$ be its predicted observation. The expected squared prediction error is
$$\begin{aligned}
E(y - \hat{y})^2 &= E\left(\epsilon + w^T x - \hat{w}^T x\right)^2 \\
&= \sigma^2 + x^T E\left[(w - \hat{w})(w - \hat{w})^T\right] x \\
&= \sigma^2 + x^T \left[H^{-1} \Lambda w w^T \Lambda H^{-1} + \sigma^2 H^{-1} - \sigma^2 H^{-1} \Lambda H^{-1}\right] x
\end{aligned}$$
The expected squared prediction error depends on the explanatory variable $x$; the average expected squared prediction error over the complete data set $A$ is therefore
$$\begin{aligned}
\frac{1}{m} \sum_{i=1}^{m} E\left(y_i - \hat{w}^T x_i\right)^2 &= \sigma^2 + \frac{1}{m} \sum_{i=1}^{m} x_i^T \left[H^{-1} \Lambda w w^T \Lambda H^{-1} + \sigma^2 H^{-1} - \sigma^2 H^{-1} \Lambda H^{-1}\right] x_i \\
&= \sigma^2 + \frac{1}{m} \operatorname{Tr}\left(X^T \left[\sigma^2 H^{-1} + H^{-1} \Lambda w w^T \Lambda H^{-1} - \sigma^2 H^{-1} \Lambda H^{-1}\right] X\right)
\end{aligned}$$
Since $\operatorname{Tr}\left(X^T \left[H^{-1} \Lambda w w^T \Lambda H^{-1} - \sigma^2 H^{-1} \Lambda H^{-1}\right] X\right) \ll \operatorname{Tr}\left(\sigma^2 X^T H^{-1} X\right)$, our Laplacian optimality criterion is formulated by minimizing the trace of $X^T H^{-1} X$.

Definition 1. Laplacian Optimal Design:
$$\min_{Z = (z_1, \cdots, z_k)} \operatorname{Tr}\left(X^T \left(Z Z^T + \lambda_1 X L X^T + \lambda_2 I\right)^{-1} X\right) \qquad (11)$$
where $z_1, \cdots, z_k$ are selected from $\{x_1, \cdots, x_m\}$.
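The criterion of Definition 1 can be evaluated directly for any candidate subset. The sketch below (ours) scores a subset by $\operatorname{Tr}(X^T H^{-1} X)$ and selects k points with a brute-force greedy loop; it is meant only to illustrate the objective, since the paper's own optimization scheme is the sequential greedy procedure developed for the kernel case in Section 4.2. The Laplacian L can be built as in the earlier sketch.

```python
import numpy as np

def lod_objective(X, Z_idx, L, lam1=0.001, lam2=1e-5):
    """Value of the LOD criterion (Definition 1) for the candidate subset Z_idx:
    Tr(X^T (Z Z^T + lam1 X L X^T + lam2 I)^{-1} X)."""
    d = X.shape[0]
    Z = X[:, Z_idx]
    H = Z @ Z.T + lam1 * (X @ L @ X.T) + lam2 * np.eye(d)
    return np.trace(X.T @ np.linalg.inv(H) @ X)

def greedy_lod(X, L, k, lam1=0.001, lam2=1e-5):
    """Pick k points one at a time, each time adding the candidate that most
    reduces the LOD objective. A brute-force illustration of the criterion,
    not the paper's kernelized update rule from Section 4.2."""
    selected = []
    for _ in range(k):
        remaining = [i for i in range(X.shape[1]) if i not in selected]
        best = min(remaining,
                   key=lambda i: lod_objective(X, selected + [i], L, lam1, lam2))
        selected.append(best)
    return selected
```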
4. KERNEL LAPLACIAN OPTIMAL DESIGN
Canonical experimental design approaches (e.g., A-Optimal Design, D-Optimal Design, and E-Optimal Design) consider only linear functions, and they fail to discover the intrinsic geometry of the data when the data space is highly nonlinear. In this section, we describe how to perform Laplacian Optimal Design in a Reproducing Kernel Hilbert Space (RKHS), which gives rise to Kernel Laplacian Optimal Design (KLOD). For given data points $x_1, \cdots, x_m \in \mathcal{X}$ with a positive definite Mercer kernel $K : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$, there exists a unique RKHS $\mathcal{H}_K$ of real-valued functions on $\mathcal{X}$. Let $K_t(s)$ be the function of $s$ obtained by fixing $t$, i.e., $K_t(s) := K(s, t)$. $\mathcal{H}_K$ consists of all finite linear combinations of the form $\sum_{i=1}^{l} \alpha_i K_{t_i}$ with $t_i \in \mathcal{X}$, together with the limits of such functions as the $t_i$ become dense in $\mathcal{X}$. We have $\langle K_s, K_t \rangle_{\mathcal{H}_K} = K(s, t)$.

4.1 Derivation of LOD in Reproducing Kernel Hilbert Space
Consider the optimization problem (5) in an RKHS: we seek a function $f \in \mathcal{H}_K$ that minimizes the following objective function:
$$\min_{f \in \mathcal{H}_K} \sum_{i=1}^{k} \left(f(z_i) - y_i\right)^2 + \frac{\lambda_1}{2} \sum_{i,j=1}^{m} \left(f(x_i) - f(x_j)\right)^2 S_{ij} + \lambda_2 \|f\|_{\mathcal{H}_K}^2 \qquad (12)$$
We have the following proposition.

Proposition 4.1. Let $\mathcal{H} = \{\sum_{i=1}^{m} \alpha_i K(\cdot, x_i) \mid \alpha_i \in \mathbb{R}\}$ be a subspace of $\mathcal{H}_K$. The solution to problem (12) lies in $\mathcal{H}$.

Proof. Let $\mathcal{H}^{\perp}$ be the orthogonal complement of $\mathcal{H}$, i.e., $\mathcal{H}_K = \mathcal{H} \oplus \mathcal{H}^{\perp}$. Any function $f \in \mathcal{H}_K$ then has an orthogonal decomposition $f = f_{\mathcal{H}} + f_{\mathcal{H}^{\perp}}$. Evaluating $f$ at $x_i$:
$$f(x_i) = \langle f, K_{x_i} \rangle_{\mathcal{H}_K} = \langle f_{\mathcal{H}} + f_{\mathcal{H}^{\perp}}, K_{x_i} \rangle_{\mathcal{H}_K} = \langle f_{\mathcal{H}}, K_{x_i} \rangle_{\mathcal{H}_K} + \langle f_{\mathcal{H}^{\perp}}, K_{x_i} \rangle_{\mathcal{H}_K}$$
Notice that $K_{x_i} \in \mathcal{H}$ while $f_{\mathcal{H}^{\perp}} \in \mathcal{H}^{\perp}$, which implies $\langle f_{\mathcal{H}^{\perp}}, K_{x_i} \rangle_{\mathcal{H}_K} = 0$. Therefore $f(x_i) = \langle f_{\mathcal{H}}, K_{x_i} \rangle_{\mathcal{H}_K} = f_{\mathcal{H}}(x_i)$, which completes the proof.

Proposition 4.1 tells us that the minimizer of problem (12) admits a representation $f^* = \sum_{i=1}^{m} \alpha_i K(\cdot, x_i)$; please see [2] for details. Let $\phi : \mathbb{R}^d \rightarrow \mathcal{H}$ be a feature map from the input space $\mathbb{R}^d$ to $\mathcal{H}$, with $K(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$. Let $X$ denote the data matrix in the RKHS, $X = (\phi(x_1), \phi(x_2), \cdots, \phi(x_m))$, and similarly define $Z = (\phi(z_1), \phi(z_2), \cdots, \phi(z_k))$. The optimization problem in the RKHS can then be written as:
$$\min_{Z} \operatorname{Tr}\left(X^T \left(Z Z^T + \lambda_1 X L X^T + \lambda_2 I\right)^{-1} X\right) \qquad (13)$$
Since the mapping function $\phi$ is generally unknown, there is no direct way to solve problem (13); in the following, we apply the kernel trick. Let $X^{-1}$ be the Moore-Penrose inverse (also known as the pseudo-inverse) of $X$. We have:
$$\begin{aligned}
X^T \left(Z Z^T + \lambda_1 X L X^T + \lambda_2 I\right)^{-1} X &= X^T X X^{-1} \left(Z Z^T + \lambda_1 X L X^T + \lambda_2 I\right)^{-1} (X^T)^{-1} X^T X \\
&= X^T X \left(Z Z^T X + \lambda_1 X L X^T X + \lambda_2 X\right)^{-1} (X^T)^{-1} X^T X \\
&= X^T X \left(X^T Z Z^T X + \lambda_1 X^T X L X^T X + \lambda_2 X^T X\right)^{-1} X^T X \\
&= K_{XX} \left(K_{XZ} K_{ZX} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX}\right)^{-1} K_{XX}
\end{aligned}$$
where $K_{XX}$ is an $m \times m$ matrix ($K_{XX,ij} = K(x_i, x_j)$), $K_{XZ}$ is an $m \times k$ matrix ($K_{XZ,ij} = K(x_i, z_j)$), and $K_{ZX}$ is a $k \times m$ matrix ($K_{ZX,ij} = K(z_i, x_j)$). Kernel Laplacian Optimal Design can thus be defined as follows:

Definition 2. Kernel Laplacian Optimal Design:
$$\min_{Z = (z_1, \cdots, z_k)} \operatorname{Tr}\left(K_{XX} \left(K_{XZ} K_{ZX} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX}\right)^{-1} K_{XX}\right) \qquad (14)$$

4.2 Optimization Scheme
In this subsection, we discuss how to solve the optimization problems (11) and (14). If we select a linear kernel for KLOD, it reduces to LOD, so we focus on problem (14) in the following. It can be shown that the optimization problem (14) is NP-hard, and we therefore develop a simple sequential greedy approach to solve it. Suppose $n$ points have been selected, denoted by the matrix $Z_n = (z_1, \cdots, z_n)$. The $(n+1)$-th point $z_{n+1}$ is selected by solving the following optimization problem:
$$\min_{Z_{n+1} = (Z_n, z_{n+1})} \operatorname{Tr}\left(K_{X Z_{n+1}}\,\text{-based objective:}\; K_{XX} \left(K_{X Z_{n+1}} K_{Z_{n+1} X} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX}\right)^{-1} K_{XX}\right) \qquad (15)$$
The kernel matrices $K_{X Z_{n+1}}$ and $K_{Z_{n+1} X}$ can be rewritten as
$$K_{X Z_{n+1}} = \left(K_{X Z_n}, K_{X z_{n+1}}\right), \qquad K_{Z_{n+1} X} = \begin{pmatrix} K_{Z_n X} \\ K_{z_{n+1} X} \end{pmatrix}$$
so that $K_{X Z_{n+1}} K_{Z_{n+1} X} = K_{X Z_n} K_{Z_n X} + K_{X z_{n+1}} K_{z_{n+1} X}$. Defining
$$A = K_{X Z_n} K_{Z_n X} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX},$$
which depends only on $X$ and $Z_n$, the $(n+1)$-th point $z_{n+1}$ is given by:
$$z_{n+1} = \arg\min_{z_{n+1}} \operatorname{Tr}\left(K_{XX} \left(A + K_{X z_{n+1}} K_{z_{n+1} X}\right)^{-1} K_{XX}\right) \qquad (16)$$
Each time a new point $z_{n+1}$ is selected, the matrix $A$ is updated by $A \leftarrow A + K_{X z_{n+1}} K_{z_{n+1} X}$. If the kernel function is chosen as the inner product $K(x, y) = \langle x, y \rangle$, then $\mathcal{H}_K$ is a linear function space and the algorithm reduces to LOD.
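The update rule above admits a direct implementation. The following sketch (ours, assuming the full kernel matrix over the candidate pool and its graph Laplacian fit in memory) performs the sequential greedy selection of Eqs. (15) and (16), maintaining A and applying the rank-one update after each selection; for clarity it re-solves the full linear system for every candidate rather than using an incremental inverse update.

```python
import numpy as np

def klod_select(K, L, k, lam1=0.001, lam2=1e-5):
    """Sequential greedy selection for Kernel Laplacian Optimal Design (Section 4.2).

    K : (m, m) kernel matrix K_XX over the candidate points.
    L : (m, m) graph Laplacian of the candidates.
    Returns the indices of the k selected points, in selection order."""
    m = K.shape[0]
    A = lam1 * K @ L @ K + lam2 * K            # no point selected yet, so K_XZ K_ZX = 0
    selected = []
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in range(m):
            if i in selected:
                continue
            Ki = K[:, [i]]                     # column K_{X z}, shape (m, 1)
            trial = A + Ki @ Ki.T              # candidate rank-one update K_{Xz} K_{zX}
            val = np.trace(K @ np.linalg.solve(trial, K))
            if val < best_val:
                best_i, best_val = i, val
        selected.append(best_i)
        Kb = K[:, [best_i]]
        A = A + Kb @ Kb.T                      # A <- A + K_{Xz} K_{zX}
    return selected
```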
5. CONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN
In this section, we describe how to apply Laplacian Optimal Design to CBIR. We begin with a brief description of image representation using low-level visual features.

5.1 Low-Level Image Representation
Low-level image representation is a crucial problem in CBIR. Commonly used visual features include color, texture, and shape. Color and texture features are the most extensively used visual features in CBIR. Compared with color and texture features, shape features are usually described only after images have been segmented into regions or objects. Since robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available. In this work, we combine a 64-dimensional color histogram and a 64-dimensional Color Texture Moment (CTM, [15]) to represent the images. The color histogram is calculated using 4 × 4 × 4 bins in HSV space. The Color Texture Moment, proposed by Yu et al. [15], integrates the color and texture characteristics of an image in a compact form. CTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps that describe different aspects of the co-occurrence relations of image pixels in each channel of the (SV cos H, SV sin H, V) color space. CTM then calculates the first and second moments of these maps as a representation of the natural color image pixel distribution. Please see [15] for details.

5.2 Relevance Feedback Image Retrieval
Relevance feedback is one of the most important techniques for narrowing the gap between low-level visual features and high-level semantic concepts [12]. Traditionally, the user's relevance feedback is used to update the query vector or to adjust the weighting of different dimensions. This process can be viewed as an on-line learning process in which the image retrieval system acts as a learner and the user acts as a teacher. The typical retrieval process is outlined as follows:
1. The user submits a query image example to the system. The system ranks the images in the database according to some pre-defined distance metric and presents the top-ranked images to the user.
2. The system selects some images from the database and requests the user to label them as relevant or irrelevant.
3. The system uses the information provided by the user to re-rank the images in the database and returns the top images to the user. Go to step 2 until the user is satisfied.
Our Laplacian Optimal Design algorithm is applied in the second step to select the most informative images; a sketch of one such round appears after this subsection. Once we obtain the labels for the images selected by LOD, we apply Laplacian Regularized Regression (LRR, [2]) to solve the optimization problem (3) and build the classifier. The classifier is then used to re-rank the images in the database. Note that, in order to reduce the computational complexity, we do not use all the unlabeled images in the database but only those within the top 500 returns of the previous iteration.
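A sketch of how steps 2 and 3 might be wired together is given below (ours, not the authors' implementation). `select_fn` is a hypothetical hook for an active selection routine such as the KLOD sketch above, and the re-ranking step fits the linear form of Laplacian Regularized Regression with a small Tikhonov term added for invertibility, which is an assumption of this sketch.

```python
import numpy as np

def select_for_labeling(X_db, prev_scores, already_labeled, select_fn,
                        n_select=10, pool_size=500):
    """Step 2: pick the images to show the user, restricting the candidate pool to
    the top-`pool_size` returns of the previous iteration and excluding anything
    labeled before. `select_fn(X_db, pool, n_select)` is a hypothetical interface."""
    pool = [i for i in np.argsort(-prev_scores)[:pool_size] if i not in already_labeled]
    return select_fn(X_db, pool, n_select)

def lrr_rerank(X_db, labeled_idx, labels, L, lam1=0.001, lam2=1e-5):
    """Step 3: fit the linear form of Laplacian Regularized Regression on all labels
    gathered so far (problem (3), plus a small Tikhonov term assumed here for
    invertibility) and score every image in the database."""
    Z = X_db[:, labeled_idx]
    y = np.asarray(labels, dtype=float)
    H = Z @ Z.T + lam1 * (X_db @ L @ X_db.T) + lam2 * np.eye(X_db.shape[0])
    w = np.linalg.solve(H, Z @ y)
    return w @ X_db   # higher score means more relevant
```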
6. EXPERIMENTAL RESULTS
In this section, we evaluate the performance of our proposed algorithm on a large image database. To demonstrate the effectiveness of the proposed LOD algorithm, we compare it with Laplacian Regularized Regression (LRR, [2]), Support Vector Machines (SVM), Support Vector Machine Active Learning (SVMactive) [14], and A-Optimal Design (AOD). SVMactive, AOD, and LOD are active learning algorithms, while LRR and SVM are standard classification algorithms. SVM makes use only of the labeled images, while LRR is a semi-supervised learning algorithm that makes use of both labeled and unlabeled images. For SVMactive, AOD, and LOD, 10 training images are selected by the algorithms themselves at each iteration, while for LRR and SVM we use the top 10 images as training data. It is important to note that SVMactive is based on the ordinary SVM, LOD is based on LRR, and AOD is based on ordinary regression. The parameters λ1 and λ2 in our LOD algorithm are empirically set to 0.001 and 0.00001. For both the LRR and LOD algorithms, we use the same graph structure (see Eq. 4) and set the number of nearest neighbors p to 5. We begin with a simple synthetic example to give some intuition about how LOD works.

6.1 Simple Synthetic Example
A simple synthetic example is given in Figure 1. The data set contains two circles, and eight points are selected by AOD and LOD. As can be seen, all the points selected by AOD come from the big circle, while LOD selects four points from the big circle and four from the small circle. Clearly, the points selected by our LOD algorithm better represent the original data set. We did not compare with SVMactive because it cannot be applied in this case due to the lack of labeled points.

Figure 1: Data selection by active learning algorithms ((a) data set, (b) AOD, (c) LOD). The numbers beside the selected points denote the order in which they were selected. The points selected by our LOD algorithm better represent the original data set. Note that the SVMactive algorithm cannot be applied in this case due to the lack of labeled points.

6.2 Image Retrieval Experimental Design
The image database we use consists of 7,900 images from 79 semantic categories of the COREL data set. It is a large and heterogeneous image set. Each image is represented as a 128-dimensional vector, as described in Section 5.1. Figure 2 shows some sample images.

Figure 2: Sample images from the categories bead, elephant, and ship.

To exhibit the advantages of our algorithm, we need a reliable way of evaluating the retrieval performance and comparing it with other algorithms. We list the different aspects of the experimental design below.

6.2.1 Evaluation Metrics
We use the precision-scope curve and the precision rate [10] to evaluate the effectiveness of the image retrieval algorithms. The scope is specified by the number N of top-ranked images presented to the user. The precision is the ratio of the number of relevant images presented to the user to the scope N. The precision-scope curve describes the precision at various scopes and thus gives an overall performance evaluation of the algorithms, whereas the precision rate emphasizes the precision at a particular scope. In general, it is appropriate to present 20 images on a screen; putting more images on a screen may degrade the quality of the presented images. Therefore, the precision at top 20 (N = 20) is especially important. In real-world image retrieval systems, the query image is usually not in the image database. To simulate such an environment, we use five-fold cross validation to evaluate the algorithms. More precisely, we divide the whole image database into five subsets of equal size, so that there are 20 images per category in each subset. At each run of cross validation, one subset is selected as the query set and the other four subsets are used as the database for retrieval. The precision-scope curve and precision rate are computed by averaging the results over the five-fold cross validation; a small sketch of these metrics follows.
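The two metrics are straightforward to compute. A small sketch (ours) is given below; `rank()` and `relevant()` in the usage comment are hypothetical placeholders for a per-query ranking function and ground-truth relevant set.

```python
import numpy as np

def precision_at(ranked_ids, relevant_set, N):
    """Precision at scope N: fraction of the top-N returned images that are relevant."""
    top = ranked_ids[:N]
    return sum(1 for i in top if i in relevant_set) / float(N)

def precision_scope_curve(ranked_ids, relevant_set, scopes=(10, 20, 30, 40, 50)):
    """Precision-scope curve: precision evaluated at a range of scopes."""
    return {N: precision_at(ranked_ids, relevant_set, N) for N in scopes}

# Usage (hypothetical rank()/relevant() per query), averaged as in each cross-validation fold:
# curves = [precision_scope_curve(rank(q), relevant(q)) for q in query_set]
# avg = {N: np.mean([c[N] for c in curves]) for N in curves[0]}
```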
6.2.2 Automatic Relevance Feedback Scheme
We designed an automatic feedback scheme to model the retrieval process. For each submitted query, our system retrieves and ranks the images in the database. Ten images are then selected from the database for user labeling, and the label information is used by the system for re-ranking. Note that images selected at previous iterations are excluded from later selections. For each query, the automatic relevance feedback mechanism is run for four iterations. It is important to note that the automatic relevance feedback scheme used here differs from the ones described in [8], [11], in which the top four relevant and the top four irrelevant images were selected as feedback images. That protocol may not be practical: in real-world image retrieval systems, most of the top-ranked images may be relevant (or irrelevant), making it difficult for the user to find four of each. It is more reasonable for the user to provide feedback only on the 10 images selected by the system; a simulation sketch of this protocol is given below.
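A sketch of this simulation protocol follows (ours). The hooks `rank_fn`, `select_fn`, and `label_category` are hypothetical placeholders for the system's ranking routine, the active selection routine, and the ground-truth category lookup used to simulate the user.

```python
def simulate_feedback(query_category, rank_fn, select_fn, label_category,
                      n_iterations=4, n_labels=10):
    """Automatic relevance feedback scheme of Section 6.2.2, sketched: at each of
    `n_iterations` rounds the system picks `n_labels` new images, the simulated user
    labels them relevant iff their category matches the query's category, and the
    system re-ranks. `select_fn` is expected to exclude already-labeled images."""
    labeled_idx, labels = [], []
    ranking = rank_fn(labeled_idx, labels)          # initial ranking (e.g., Euclidean)
    history = [ranking]
    for _ in range(n_iterations):
        new_idx = select_fn(ranking, labeled_idx, n_labels)
        for i in new_idx:
            labeled_idx.append(i)
            labels.append(1.0 if label_category(i) == query_category else -1.0)
        ranking = rank_fn(labeled_idx, labels)      # re-rank with the new labels
        history.append(ranking)
    return history
```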
6.3 Image Retrieval Performance
In the real world, it is not practical to require the user to provide many rounds of feedback, so the retrieval performance after the first two rounds (especially the first) is the most important. Figure 3 shows the average precision-scope curves of the different algorithms for the first two feedback iterations. At the beginning of retrieval, the Euclidean distances in the original 128-dimensional space are used to rank the images in the database. After the user provides relevance feedback, the LRR, SVM, SVMactive, AOD, and LOD algorithms are applied to re-rank the images. In order to reduce the time complexity of the active learning algorithms, we do not select the most informative images from the whole database but from the top 500 images. For LRR and SVM, the user is required to label the top 10 images; for SVMactive, AOD, and LOD, the user is required to label the 10 most informative images selected by these algorithms. Note that SVMactive can only be applied once a classifier has already been built; it therefore cannot be applied at the first round, and we use the standard SVM to build the initial classifier.

Figure 3: The average precision-scope curves of the different algorithms for the first two feedback iterations ((a) feedback iteration 1, (b) feedback iteration 2). The LOD algorithm performs the best over the entire scope. Note that at the first round of feedback the SVMactive algorithm cannot be applied; the ordinary SVM is used to build the initial classifier.

As can be seen, our LOD algorithm outperforms the other four algorithms over the entire scope. The LRR algorithm also performs better than SVM, because LRR makes efficient use of the unlabeled images by incorporating a locality preserving regularizer into the ordinary regression objective function. The AOD algorithm performs the worst. As the scope gets larger, the performance differences between the algorithms get smaller.

By iteratively adding the user's feedback, the corresponding precision results (at top 10, top 20, and top 30) of the five algorithms are shown in Figure 4.

Figure 4: Performance evaluation of the five learning algorithms for relevance feedback image retrieval: (a) precision at top 10, (b) precision at top 20, and (c) precision at top 30. Our LOD algorithm consistently outperforms the other four algorithms.

As can be seen, our LOD algorithm performs the best in all cases, and the LRR algorithm performs second best. Both of these algorithms make use of the unlabeled images, which shows that the unlabeled images are helpful for discovering the intrinsic geometrical structure of the image space and thereby enhance the retrieval performance. Since the user may not be willing to provide many rounds of relevance feedback, the performance in the first two rounds is especially important. After the first two rounds of relevance feedback, our LOD algorithm achieves a 6.8% performance improvement for the top 10 results, 5.2% for the top 20 results, and 4.1% for the top 30 results compared to the second-best algorithm (LRR).

6.4 Discussion
Several experiments on the COREL database have been systematically performed. We would like to highlight several interesting points:
1. It is clear that the use of active learning is beneficial in the image retrieval domain. There is a significant increase in performance from using the active learning methods, and among the three active learning methods (SVMactive, AOD, LOD), our proposed LOD algorithm performs the best.
2. In many real-world applications such as relevance feedback image retrieval, there are generally two ways of reducing the labor-intensive manual labeling task. One is active learning, which selects the most informative samples to label; the other is semi-supervised learning, which makes use of the unlabeled samples to enhance learning performance. Both strategies have been studied extensively in the past [14], [7], [5], [8]. The work presented in this paper focuses on active learning, but it also takes advantage of recent progress in semi-supervised learning [2]. Specifically, we incorporate a locality preserving regularizer into the standard regression framework and find the most informative samples with respect to the new objective function. In this way, the active learning and semi-supervised learning techniques are seamlessly unified for learning an optimal classifier.
3. The relevance feedback technique is crucial to image retrieval. For all five algorithms, the retrieval performance improves as the user provides more feedback.
7. CONCLUSIONS AND FUTURE WORK
This paper describes a novel active learning algorithm, called Laplacian Optimal Design, that enables more effective relevance feedback image retrieval. Our algorithm is based on an objective function that simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space. Using techniques from experimental design, our algorithm finds the most informative images to label; these labeled images, together with the unlabeled images in the database, are used to learn a classifier. The experimental results on the COREL database show that both active learning and semi-supervised learning can significantly improve retrieval performance. In this paper, we consider the image retrieval problem on a small, static, closed-domain image collection. A much more challenging domain is the World Wide Web (WWW). For Web image search, it is possible to collect a large amount of user click information, which could naturally be used to construct the affinity graph in our algorithm; however, the computational complexity in the Web scenario may become a crucial issue. Finally, although our primary interest in this paper is relevance feedback image retrieval, our results may also be of interest to researchers in pattern recognition and machine learning, especially when a large amount of data is available but only a limited number of samples can be labeled.

8. REFERENCES
[1] A. C. Atkinson and A. N. Donev. Optimum Experimental Designs. Oxford University Press, 2002.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from examples. Journal of Machine Learning Research, 7:2399-2434, 2006.
[3] F. R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.
[4] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129-145, 1996.
[5] A. Dong and B. Bhanu. A new semi-supervised EM algorithm for image retrieval. In IEEE Conf. on Computer Vision and Pattern Recognition, Madison, WI, 2003.
[6] P. Flaherty, M. I. Jordan, and A. P. Arkin. Robust design of biological experiments. In Advances in Neural Information Processing Systems 18, Vancouver, Canada, 2005.
[7] K.-S. Goh, E. Y. Chang, and W.-C. Lai. Multimodal concept-dependent active learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[8] X. He. Incremental semi-supervised subspace learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[9] S. C. Hoi and M. R. Lyu. A semi-supervised active learning framework for image retrieval. In IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, 2005.
[10] D. P. Huijsmans and N. Sebe. How to complete performance graphs in content-based image retrieval: Add generality and normalize scope. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):245-251, 2005.
[11] Y.-Y. Lin, T.-L. Liu, and H.-T. Chen. Semantic manifold learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, Singapore, November 2005.
[12] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra. Relevance feedback: A power tool for interactive content-based image retrieval. IEEE Transactions on Circuits and Systems for Video Technology, 8(5), 1998.
[13] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349-1380, 2000.
[14] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proceedings of the Ninth ACM International Conference on Multimedia, pages 107-118, 2001.
[15] H. Yu, M. Li, H.-J. Zhang, and J. Feng. Color texture moments for content-based image retrieval. In International Conference on Image Processing, pages 24-28, 2002.
[16] K. Yu, J. Bi, and V. Tresp. Active learning via transductive experimental design. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.
, improve the classifier the most ) if they were labeled and used as training samples .\nIn this paper , we propose a novel active learning algorithm , called Laplacian Optimal Design ( LOD ) , for relevance feedback image retrieval .\nOur algorithm is based on a regression model which minimizes the least square error on the measured ( or , labeled ) images and simultaneously preserves the local geometrical structure of the image space .\nSpecifically , we assume that if two images are sufficiently close to each other , then their measurements ( or , labels ) are close as well .\nBy constructing a nearest neighbor graph , the geometrical structure of the image space can be described by the graph Laplacian .\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images , which gives us the most amount of information .\nExperimental results on Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval .\n1 .\nINTRODUCTION\nIn many machine learning and information retrieval tasks , there is no shortage of unlabeled data but labels are expensive .\nThe challenge is thus to determine which unlabeled samples would be the most informative ( i.e. , improve the classifier the most ) if they were labeled and used as training samples .\nThis problem is typically called active learning [ 4 ] .\nHere the task is to minimize an overall cost , which depends both on the classifier accuracy and the cost of data collection .\nMany real world applications can be casted into active learning framework .\nParticularly , we consider the problem of relevance feedback driven Content-Based Image Retrieval ( CBIR ) [ 13 ] .\nContent-Based Image Retrieval has attracted substantial interests in the last decade [ 13 ] .\nIt is motivated by the fast growth of digital image databases which , in turn , require efficient search schemes .\nRather than describe an image using text , in these systems an image query is described using one or more example images .\nThe low level visual features ( color , texture , shape , etc. 
) are automatically extracted to represent the images .\nHowever , the low level features may not accurately characterize the high level semantic concepts .\nTo narrow down the semantic gap , relevance feedback is introduced into CBIR [ 12 ] .\nIn many of the current relevance feedback driven CBIR systems , the user is required to provide his/her relevance judgments on the top images returned by the system .\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not .\nHowever , in general the top returned images may not be the most informative ones .\nIn the worst case , all the top images labeled by the user may be positive and thus the standard classification techniques can not be applied due to the lack of negative examples .\nUnlike the standard classification problems where the labeled samples are pregiven , in relevance feedback image retrieval the system can actively select the images to label .\nThus active learning can be naturally introduced into image retrieval .\nDespite many existing active learning techniques , Support Vector Machine ( SVM ) active learning [ 14 ] and regression based active learning [ 1 ] have received the most interests .\nBased on the observation that the closer to the SVM boundary an image is , the less reliable its classification is , SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback so as to achieve maximal refinement on the hyperplane between the two classes .\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough .\nMoreover , it may not be applied at the beginning of the retrieval when there is no labeled images .\nSome other SVM based active learning algorithms can be found in [ 7 ] , [ 9 ] .\nIn statistics , the problem of selecting samples to label is typically referred to as experimental design .\nThe sample x is referred to as experiment , and its label y is referred to as measurement .\nThe study of optimal experimental design ( OED ) [ 1 ] is concerned with the design of experiments that are expected to minimize variances of a parameterized model .\nThe intent of optimal experimental design is usually to maximize confidence in a given model , minimize parameter variances for system identification , or minimize the model 's output variance .\nClassical experimental design approaches include A-Optimal Design , D-Optimal Design , and E-Optimal Design .\nAll of these approaches are based on a least squares regression model .\nComparing to SVM based active learning algorithms , experimental design approaches are much more efficient in computation .\nHowever , this kind of approaches takes only measured ( or , labeled ) data into account in their objective function , while the unmeasured ( or , unlabeled ) data is ignored .\nBenefit from recent progresses on optimal experimental design and semi-supervised learning , in this paper we propose a novel active learning algorithm for image retrieval , called Laplacian Optimal Design ( LOD ) .\nUnlike traditional experimental design methods whose loss functions are only defined on the measured points , the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points .\nSpecifically , we introduce a locality preserving regularizer into the standard least-square-error based loss function .\nThe new loss function aims to find a classifier which is locally as smooth as possible .\nIn other words , if two points 
are sufficiently close to each other in the input space , then they are expected to share the same label .\nOnce the loss function is defined , we can select the most informative data points which are presented to the user for labeling .\nIt would be important to note that the most informative images may not be the top returned images .\nThe rest of the paper is organized as follows .\nIn Section 2 , we provide a brief description of the related work .\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3 .\nIn Section 4 , we compare our algorithm with the state-or-the-art algorithms and present the experimental results on image retrieval .\nFinally , we provide some concluding remarks and suggestions for future work in Section 5 .\n2 .\nRELATED WORK\nSince our proposed algorithm is based on regression framework .\nThe most related work is optimal experimental design [ 1 ] , including A-Optimal Design , D-Optimal Design , and EOptimal Design .\nIn this Section , we give a brief description of these approaches .\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following .\nGiven a set of points A = { x1 , x2 , \u00b7 \u00b7 \u00b7 , xm } in Rd , find a subset B = { z1 , z2 , \u00b7 \u00b7 \u00b7 , zk } C A which contains the most informative points .\nIn other words , the points zi ( i = 1 , \u00b7 \u00b7 \u00b7 , k ) can improve the classifier the most if they are labeled and used as training points .\n2.2 Optimal Experimental Design\nWe consider a linear regression model\nwhere y is the observation , x is the independent variable , w is the weight vector and ~ is an unknown error with zero mean .\nDifferent observations have errors that are independent , but with equal variances \u03c32 .\nWe define f ( x ) = wT x to be the learner 's output given input x and the weight vector w. 
Suppose we have a set of labeled sample points ( z1 , y1 ) , \u00b7 \u00b7 \u00b7 , ( zk , yk ) , where yi is the label of zi .\nThus , the maximum likelihood estimate for the weight vector , \u02c6w , is that which minimizes the sum squared error\nBy Gauss-Markov theorem , we know that w\u02c6 \u2212 w has a zero mean and a covariance matrix given by \u03c32H \u2212 1 sse , where Hsse is the Hessian of Jsse ( w )\nwhere Z = ( z1 , z2 , \u00b7 \u00b7 \u00b7 , zk ) .\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design\nare : \u2022 D-optimal design : determinant of Hsse .\n\u2022 A-optimal design : trace of Hsse .\n\u2022 E-optimal design : maximum eigenvalue of Hsse .\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of matrix trace , A-optimal design is more efficient than the other two .\nSome recent work on experimental design can be found in [ 6 ] , [ 16 ] .\n3 .\nLAPLACIAN OPTIMAL DESIGN\n3.1 The Objective Function\n4 .\nKERNEL LAPLACIAN OPTIMAL DESIGN\n4.1 Derivation of LOD in Reproducing Kernel Hilbert Space\n4.2 Optimization Scheme\n5 .\nCONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN\n5.1 Low-Level Image Representation\n5.2 Relevance Feedback Image Retrieval\n6 .\nEXPERIMENTAL RESULTS\n6.1 Simple Synthetic Example\n6.2 Image Retrieval Experimental Design\n6.2.1 Evaluation Metrics\n6.2.2 Automatic Relevance Feedback Scheme\n6.3 Image Retrieval Performance\n6.4 Discussion\n7 .\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm , called Laplacian Optimal Design , to enable more effective relevance feedback image retrieval .\nOur algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space .\nUsing techniques from experimental design , our algorithm finds the most informative images to label .\nThese labeled images and the unlabeled images in the database are used to learn a classifier .\nThe experimental results on Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance .\nIn this paper , we consider the image retrieval problem on a small , static , and closed-domain image data .\nA much more challenging domain is the World Wide Web ( WWW ) .\nFor Web image search , it is possible to collect a large amount of user click information .\nThis information can be naturally used to construct the affinity graph in our algorithm .\nHowever , the computational complexity in Web scenario may become a crucial issue .\nAlso , although our primary interest in this paper is focused on relevance feedback image retrieval , our results may also be of interest to researchers in patten recognition and machine learning , especially when a large amount of data is available but only a limited samples can be labeled .", "lvl-4": "Laplacian Optimal Design for Imag e Retrieval\nABSTRACT\nRelevance feedback is a powerful technique to enhance ContentBased Image Retrieval ( CBIR ) performance .\nIt solicits the user 's relevance judgments on the retrieved images returned by the CBIR systems .\nThe user 's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images .\nHowever , the top returned images may not be the most informative ones .\nThe challenge is thus to determine which unlabeled images would be the most informative ( i.e. 
, improve the classifier the most ) if they were labeled and used as training samples .\nIn this paper , we propose a novel active learning algorithm , called Laplacian Optimal Design ( LOD ) , for relevance feedback image retrieval .\nOur algorithm is based on a regression model which minimizes the least square error on the measured ( or , labeled ) images and simultaneously preserves the local geometrical structure of the image space .\nSpecifically , we assume that if two images are sufficiently close to each other , then their measurements ( or , labels ) are close as well .\nBy constructing a nearest neighbor graph , the geometrical structure of the image space can be described by the graph Laplacian .\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images , which gives us the most amount of information .\nExperimental results on Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval .\n1 .\nINTRODUCTION\nIn many machine learning and information retrieval tasks , there is no shortage of unlabeled data but labels are expensive .\nThe challenge is thus to determine which unlabeled samples would be the most informative ( i.e. , improve the classifier the most ) if they were labeled and used as training samples .\nThis problem is typically called active learning [ 4 ] .\nMany real world applications can be casted into active learning framework .\nParticularly , we consider the problem of relevance feedback driven Content-Based Image Retrieval ( CBIR ) [ 13 ] .\nContent-Based Image Retrieval has attracted substantial interests in the last decade [ 13 ] .\nIt is motivated by the fast growth of digital image databases which , in turn , require efficient search schemes .\nRather than describe an image using text , in these systems an image query is described using one or more example images .\nThe low level visual features ( color , texture , shape , etc. 
) are automatically extracted to represent the images .\nTo narrow down the semantic gap , relevance feedback is introduced into CBIR [ 12 ] .\nIn many of the current relevance feedback driven CBIR systems , the user is required to provide his/her relevance judgments on the top images returned by the system .\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not .\nHowever , in general the top returned images may not be the most informative ones .\nIn the worst case , all the top images labeled by the user may be positive and thus the standard classification techniques can not be applied due to the lack of negative examples .\nUnlike the standard classification problems where the labeled samples are pregiven , in relevance feedback image retrieval the system can actively select the images to label .\nThus active learning can be naturally introduced into image retrieval .\nDespite many existing active learning techniques , Support Vector Machine ( SVM ) active learning [ 14 ] and regression based active learning [ 1 ] have received the most interests .\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough .\nMoreover , it may not be applied at the beginning of the retrieval when there is no labeled images .\nSome other SVM based active learning algorithms can be found in [ 7 ] , [ 9 ] .\nIn statistics , the problem of selecting samples to label is typically referred to as experimental design .\nThe sample x is referred to as experiment , and its label y is referred to as measurement .\nThe study of optimal experimental design ( OED ) [ 1 ] is concerned with the design of experiments that are expected to minimize variances of a parameterized model .\nThe intent of optimal experimental design is usually to maximize confidence in a given model , minimize parameter variances for system identification , or minimize the model 's output variance .\nClassical experimental design approaches include A-Optimal Design , D-Optimal Design , and E-Optimal Design .\nAll of these approaches are based on a least squares regression model .\nComparing to SVM based active learning algorithms , experimental design approaches are much more efficient in computation .\nHowever , this kind of approaches takes only measured ( or , labeled ) data into account in their objective function , while the unmeasured ( or , unlabeled ) data is ignored .\nBenefit from recent progresses on optimal experimental design and semi-supervised learning , in this paper we propose a novel active learning algorithm for image retrieval , called Laplacian Optimal Design ( LOD ) .\nUnlike traditional experimental design methods whose loss functions are only defined on the measured points , the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points .\nSpecifically , we introduce a locality preserving regularizer into the standard least-square-error based loss function .\nThe new loss function aims to find a classifier which is locally as smooth as possible .\nIn other words , if two points are sufficiently close to each other in the input space , then they are expected to share the same label .\nOnce the loss function is defined , we can select the most informative data points which are presented to the user for labeling .\nIt would be important to note that the most informative images may not be the top returned images .\nThe rest of the paper is organized as follows .\nIn Section 
2 , we provide a brief description of the related work .\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3 .\nIn Section 4 , we compare our algorithm with the state-or-the-art algorithms and present the experimental results on image retrieval .\nFinally , we provide some concluding remarks and suggestions for future work in Section 5 .\n2 .\nRELATED WORK\nSince our proposed algorithm is based on regression framework .\nThe most related work is optimal experimental design [ 1 ] , including A-Optimal Design , D-Optimal Design , and EOptimal Design .\nIn this Section , we give a brief description of these approaches .\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following .\nIn other words , the points zi ( i = 1 , \u00b7 \u00b7 \u00b7 , k ) can improve the classifier the most if they are labeled and used as training points .\n2.2 Optimal Experimental Design\nWe consider a linear regression model\nDifferent observations have errors that are independent , but with equal variances \u03c32 .\nThus , the maximum likelihood estimate for the weight vector , \u02c6w , is that which minimizes the sum squared error\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design\nare : \u2022 D-optimal design : determinant of Hsse .\n\u2022 A-optimal design : trace of Hsse .\n\u2022 E-optimal design : maximum eigenvalue of Hsse .\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of matrix trace , A-optimal design is more efficient than the other two .\nSome recent work on experimental design can be found in [ 6 ] , [ 16 ] .\n7 .\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm , called Laplacian Optimal Design , to enable more effective relevance feedback image retrieval .\nOur algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space .\nUsing techniques from experimental design , our algorithm finds the most informative images to label .\nThese labeled images and the unlabeled images in the database are used to learn a classifier .\nThe experimental results on Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance .\nIn this paper , we consider the image retrieval problem on a small , static , and closed-domain image data .\nFor Web image search , it is possible to collect a large amount of user click information .\nThis information can be naturally used to construct the affinity graph in our algorithm .", "lvl-2": "Laplacian Optimal Design for Imag e Retrieval\nABSTRACT\nRelevance feedback is a powerful technique to enhance ContentBased Image Retrieval ( CBIR ) performance .\nIt solicits the user 's relevance judgments on the retrieved images returned by the CBIR systems .\nThe user 's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images .\nHowever , the top returned images may not be the most informative ones .\nThe challenge is thus to determine which unlabeled images would be the most informative ( i.e. 
, improve the classifier the most ) if they were labeled and used as training samples .\nIn this paper , we propose a novel active learning algorithm , called Laplacian Optimal Design ( LOD ) , for relevance feedback image retrieval .\nOur algorithm is based on a regression model which minimizes the least square error on the measured ( or , labeled ) images and simultaneously preserves the local geometrical structure of the image space .\nSpecifically , we assume that if two images are sufficiently close to each other , then their measurements ( or , labels ) are close as well .\nBy constructing a nearest neighbor graph , the geometrical structure of the image space can be described by the graph Laplacian .\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images , which gives us the most amount of information .\nExperimental results on Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval .\n1 .\nINTRODUCTION\nIn many machine learning and information retrieval tasks , there is no shortage of unlabeled data but labels are expensive .\nThe challenge is thus to determine which unlabeled samples would be the most informative ( i.e. , improve the classifier the most ) if they were labeled and used as training samples .\nThis problem is typically called active learning [ 4 ] .\nHere the task is to minimize an overall cost , which depends both on the classifier accuracy and the cost of data collection .\nMany real world applications can be casted into active learning framework .\nParticularly , we consider the problem of relevance feedback driven Content-Based Image Retrieval ( CBIR ) [ 13 ] .\nContent-Based Image Retrieval has attracted substantial interests in the last decade [ 13 ] .\nIt is motivated by the fast growth of digital image databases which , in turn , require efficient search schemes .\nRather than describe an image using text , in these systems an image query is described using one or more example images .\nThe low level visual features ( color , texture , shape , etc. 
) are automatically extracted to represent the images .\nHowever , the low level features may not accurately characterize the high level semantic concepts .\nTo narrow down the semantic gap , relevance feedback is introduced into CBIR [ 12 ] .\nIn many of the current relevance feedback driven CBIR systems , the user is required to provide his/her relevance judgments on the top images returned by the system .\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not .\nHowever , in general the top returned images may not be the most informative ones .\nIn the worst case , all the top images labeled by the user may be positive and thus the standard classification techniques can not be applied due to the lack of negative examples .\nUnlike the standard classification problems where the labeled samples are pregiven , in relevance feedback image retrieval the system can actively select the images to label .\nThus active learning can be naturally introduced into image retrieval .\nDespite many existing active learning techniques , Support Vector Machine ( SVM ) active learning [ 14 ] and regression based active learning [ 1 ] have received the most interests .\nBased on the observation that the closer to the SVM boundary an image is , the less reliable its classification is , SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback so as to achieve maximal refinement on the hyperplane between the two classes .\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough .\nMoreover , it may not be applied at the beginning of the retrieval when there is no labeled images .\nSome other SVM based active learning algorithms can be found in [ 7 ] , [ 9 ] .\nIn statistics , the problem of selecting samples to label is typically referred to as experimental design .\nThe sample x is referred to as experiment , and its label y is referred to as measurement .\nThe study of optimal experimental design ( OED ) [ 1 ] is concerned with the design of experiments that are expected to minimize variances of a parameterized model .\nThe intent of optimal experimental design is usually to maximize confidence in a given model , minimize parameter variances for system identification , or minimize the model 's output variance .\nClassical experimental design approaches include A-Optimal Design , D-Optimal Design , and E-Optimal Design .\nAll of these approaches are based on a least squares regression model .\nComparing to SVM based active learning algorithms , experimental design approaches are much more efficient in computation .\nHowever , this kind of approaches takes only measured ( or , labeled ) data into account in their objective function , while the unmeasured ( or , unlabeled ) data is ignored .\nBenefit from recent progresses on optimal experimental design and semi-supervised learning , in this paper we propose a novel active learning algorithm for image retrieval , called Laplacian Optimal Design ( LOD ) .\nUnlike traditional experimental design methods whose loss functions are only defined on the measured points , the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points .\nSpecifically , we introduce a locality preserving regularizer into the standard least-square-error based loss function .\nThe new loss function aims to find a classifier which is locally as smooth as possible .\nIn other words , if two points 
are sufficiently close to each other in the input space , then they are expected to share the same label .\nOnce the loss function is defined , we can select the most informative data points which are presented to the user for labeling .\nIt would be important to note that the most informative images may not be the top returned images .\nThe rest of the paper is organized as follows .\nIn Section 2 , we provide a brief description of the related work .\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3 .\nIn Section 4 , we compare our algorithm with the state-or-the-art algorithms and present the experimental results on image retrieval .\nFinally , we provide some concluding remarks and suggestions for future work in Section 5 .\n2 .\nRELATED WORK\nSince our proposed algorithm is based on regression framework .\nThe most related work is optimal experimental design [ 1 ] , including A-Optimal Design , D-Optimal Design , and EOptimal Design .\nIn this Section , we give a brief description of these approaches .\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following .\nGiven a set of points A = { x1 , x2 , \u00b7 \u00b7 \u00b7 , xm } in Rd , find a subset B = { z1 , z2 , \u00b7 \u00b7 \u00b7 , zk } C A which contains the most informative points .\nIn other words , the points zi ( i = 1 , \u00b7 \u00b7 \u00b7 , k ) can improve the classifier the most if they are labeled and used as training points .\n2.2 Optimal Experimental Design\nWe consider a linear regression model\nwhere y is the observation , x is the independent variable , w is the weight vector and ~ is an unknown error with zero mean .\nDifferent observations have errors that are independent , but with equal variances \u03c32 .\nWe define f ( x ) = wT x to be the learner 's output given input x and the weight vector w. Suppose we have a set of labeled sample points ( z1 , y1 ) , \u00b7 \u00b7 \u00b7 , ( zk , yk ) , where yi is the label of zi .\nThus , the maximum likelihood estimate for the weight vector , \u02c6w , is that which minimizes the sum squared error\nBy Gauss-Markov theorem , we know that w\u02c6 \u2212 w has a zero mean and a covariance matrix given by \u03c32H \u2212 1 sse , where Hsse is the Hessian of Jsse ( w )\nwhere Z = ( z1 , z2 , \u00b7 \u00b7 \u00b7 , zk ) .\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design\nare : \u2022 D-optimal design : determinant of Hsse .\n\u2022 A-optimal design : trace of Hsse .\n\u2022 E-optimal design : maximum eigenvalue of Hsse .\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of matrix trace , A-optimal design is more efficient than the other two .\nSome recent work on experimental design can be found in [ 6 ] , [ 16 ] .\n3 .\nLAPLACIAN OPTIMAL DESIGN\nSince the covariance matrix Hsse used in traditional approaches is only dependent on the measured samples , i.e. 
zi 's , these approaches fail to evaluate the expected errors on the unmeasured samples .\nIn this Section , we introduce a novel active learning algorithm called Laplacian Optimal Design ( LOD ) which makes efficient use of both measured ( labeled ) and unmeasured ( unlabeled ) samples .\n3.1 The Objective Function\nIn many machine learning problems , it is natural to assume that if two points xi , xj are sufficiently close to each other , then their measurements ( f ( xi ) , f ( xj ) ) are close as\nwell .\nLet S be a similarity matrix .\nThus , a new loss function which respects the geometrical structure of the data space can be defined as follows : where yi is the measurement ( or , label ) of zi .\nNote that , the loss function ( 3 ) is essentially the same as the one used in Laplacian Regularized Regression ( LRR , [ 2 ] ) .\nHowever , LRR is a passive learning algorithm where the training data is given .\nIn this paper , we are focused on how to select the most informative data for training .\nThe loss function with our choice of symmetric weights Sij ( Sij = Sji ) incurs a heavy penalty if neighboring points xi and xj are mapped far apart .\nTherefore , minimizing J0 ( w ) is an attempt to ensure that if xi and xj are close then f ( xi ) and f ( xj ) are close as well .\nThere are many choices of the similarity matrix S .\nA simple definition is as follows :\nLet D be a diagonal matrix , Dii = ~ j Sij , and L = D \u2212 S .\nThe matrix L is called graph Laplacian in spectral graph theory [ 3 ] .\nLet y = ( y1 , \u00b7 \u00b7 \u00b7 , yk ) T and X = ( x1 , \u00b7 \u00b7 \u00b7 , xm ) .\nFollowing some simple algebraic steps , we see that :\nwhere I is an identity matrix and \u039b = \u03bb1XLXT + \u03bb2I .\nClearly , H is of full rank .\nRequiring that the gradient of J ( w ) with respect to w vanish gives the optimal estimate \u02c6w :\nThe following proposition states the bias and variance properties of the estimator for the coefficient vector w.\nFor any x , let y\u02c6 = \u02c6wT x be its predicted observation .\nThe expected squared prediction error is E ( y \u2212 \u02c6y ) 2\nIn some cases , the matrix ZZT + \u03bbXLXT is singular ( e.g. if m < d ) .\nThus , there is no stable solution to the optimization problem Eq .\n( 3 ) .\nA common way to deal with this ill-posed problem is to introduce a Tikhonov regularizer into our loss function : J ( w ) Clearly the expected square prediction error depends on the explanatory variable x , therefore average expected square predictive error over the complete data set A is\nOur Laplacian optimality criterion is thus formulated by minimizing the trace of XT H \u2212 1X .\nwhere z1 , \u00b7 \u00b7 \u00b7 , zk are selected from { x1 , \u00b7 \u00b7 \u00b7 , xm } .\n4 .\nKERNEL LAPLACIAN OPTIMAL DESIGN\nCanonical experimental design approaches ( e.g. A-Optimal Design , D-Optimal Design , and E-Optimal ) only consider linear functions .\nThey fail to discover the intrinsic geometry in the data when the data space is highly nonlinear .\nIn this section , we describe how to perform Laplacian Experimental Design in Reproducing Kernel Hilbert Space ( RKHS ) which gives rise to Kernel Laplacian Experimental Design ( KLOD ) .\nFor given data points x1 , \u00b7 \u00b7 \u00b7 , xm \u2208 X with a positive definite mercer kernel K : X \u00d7 X \u2192 R , there exists a unique RKHS HK of real valued functions on X. 
Let Kt ( s ) be the function of s obtained by fixing t and letting Kt ( s ) .\n= K ( s , t ) .\nHK consists of all finite linear combinations of the form ~ li = 1 \u03b1iKti with ti \u2208 X and limits of such functions as the ti become dense in X .\nWe have ~ Ks , Kt ~ HK = K ( s , t ) .\n4.1 Derivation of LOD in Reproducing Kernel Hilbert Space\nConsider the optimization problem ( 5 ) in RKHS .\nThus , we seek a function f \u2208 HK such that the following objective function is minimized :\nWe have the following proposition .\nPROOF .\nLet H \u22a5 be the orthogonal complement of H , i.e. HK = H \u2295 H \u22a5 .\nThus , for any function f \u2208 HK , it has orthogonal decomposition as follows :\nNotice that Kxi \u2208 H while fH \u22a5 \u2208 H \u22a5 .\nThis implies that ~ fH \u22a5 , Kxi ~ HK = 0 .\nTherefore , f ( xi ) = ~ fH , Kxi ~ HK = fH ( xi ) This completes the proof .\nProposition 4.1 tells us the minimizer of problem ( 12 ) admits a representation f \u2217 = ~ mi = 1 \u03b1iK ( \u00b7 , xi ) .\nPlease see [ 2 ] for the details .\nLet \u03c6 : Rd \u2192 H be a feature map from the input space Rd to H , and K ( xi , xj ) = < \u03c6 ( xi ) , \u03c6 ( xj ) > .\nLet X denote the data matrix in RKHS , X = ( \u03c6 ( x1 ) , \u03c6 ( x2 ) , \u00b7 \u00b7 \u00b7 , \u03c6 ( xm ) ) .\nSimilarly , we define Z = ( \u03c6 ( z1 ) , \u03c6 ( z2 ) , \u00b7 \u00b7 \u00b7 , \u03c6 ( zk ) ) .\nThus , the optimization problem in RKHS can be written as follows :\nSince the mapping function \u03c6 is generally unknown , there is no direct way to solve problem ( 13 ) .\nIn the following , we apply kernel tricks to solve this optimization problem .\nLet X \u2212 1 be the Moore-Penrose inverse ( also known as pseudo inverse ) of X. Thus , we have :\nwhere KXX is a m \u00d7 m matrix ( KXX , ij = K ( xi , xj ) ) , KXZ is a m \u00d7 k matrix ( KXZ , ij = K ( xi , zj ) ) , and KZX is a k \u00d7 m matrix ( KZX , ij = K ( zi , xj ) ) .\nThus , the Kernel Laplacian Optimal Design can be defined as follows :\n4.2 Optimization Scheme\nIn this subsection , we discuss how to solve the optimization problems ( 11 ) and ( 14 ) .\nParticularly , if we select a linear kernel for KLOD , then it reduces to LOD .\nTherefore , we will focus on problem ( 14 ) in the following .\nIt can be shown that the optimization problem ( 14 ) is NP-hard .\nIn this subsection , we develop a simple sequential greedy approach to solve ( 14 ) .\nSuppose n points have been selected , denoted by a matrix Zn = ( z1 , \u00b7 \u00b7 \u00b7 , zn ) .\nThe ( n + 1 ) - th point zn +1 can be selected by solving the following optimization problem :\nIf the kernel function is chosen as inner product K ( x , y ) = ( x , y ) , then WK is a linear functional space and the algorithm reduces to LOD .\n5 .\nCONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN\nIn this section , we describe how to apply Laplacian Optimal Design to CBIR .\nWe begin with a brief description of image representation using low level visual features .\n5.1 Low-Level Image Representation\nLow-level image representation is a crucial problem in CBIR .\nGeneral visual features includes color , texture , shape , etc. 
Color and texture features are the most extensively used visual features in CBIR. Compared with color and texture features, shape features are usually described only after images have been segmented into regions or objects. Since robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available. In this work, we combine a 64-dimensional color histogram and the 64-dimensional Color Texture Moment (CTM, [15]) to represent the images. The color histogram is calculated using 4 × 4 × 4 bins in HSV space. The Color Texture Moment was proposed by Yu et al. [15]; it integrates the color and texture characteristics of the image in a compact form. CTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps to describe different aspects of the co-occurrence relations of image pixels in each channel of the (SV cos H, SV sin H, V) color space. CTM then calculates the first and second moments of these maps as a representation of the natural color image pixel distribution. Please see [15] for details.

5.2 Relevance Feedback Image Retrieval

Relevance feedback is one of the most important techniques for narrowing the gap between low-level visual features and high-level semantic concepts [12]. Traditionally, the user's relevance feedback is used to update the query vector or adjust the weighting of different dimensions. This process can be viewed as an on-line learning process in which the image retrieval system acts as a learner and the user acts as a teacher. The typical retrieval process is outlined as follows:

1. The user submits a query image example to the system. The system ranks the images in the database according to some pre-defined distance metric and presents the top-ranked images to the user.

2. The system selects some images from the database and requests the user to label them as "relevant" or "irrelevant".

3. The system uses the information provided by the user to re-rank the images in the database and returns the top images to the user. Go to step 2 until the user is satisfied.

Our Laplacian Optimal Design algorithm is applied in the second step to select the most informative images. Once we get the labels for the images selected by LOD, we apply Laplacian Regularized Regression (LRR, [2]) to solve the optimization problem (3) and build the classifier. The classifier is then used to re-rank the images in the database. Note that, in order to reduce the computational complexity, we do not use all the unlabeled images in the database but only those within the top 500 returns of the previous iteration.

6. EXPERIMENTAL RESULTS

In this section, we evaluate the performance of our proposed algorithm on a large image database. To demonstrate the effectiveness of our proposed LOD algorithm, we compare it with Laplacian Regularized Regression (LRR, [2]), Support Vector Machine (SVM), Support Vector Machine Active Learning (SVMactive) [14], and A-Optimal Design (AOD). SVMactive, AOD, and LOD are all active learning algorithms, while LRR and SVM are standard classification algorithms. SVM makes use of only the labeled images, while LRR is a semi-supervised learning algorithm which makes use of both labeled and unlabeled images. For SVMactive, AOD, and LOD, 10 training images are selected by the algorithms themselves at each iteration.
For LRR and SVM, we use the top 10 images as training data. It is important to note that SVMactive is based on the ordinary SVM, LOD is based on LRR, and AOD is based on ordinary regression. The parameters λ1 and λ2 in our LOD algorithm are empirically set to 0.001 and 0.00001. For both the LRR and LOD algorithms, we use the same graph structure (see Eq. 4) and set the value of p (the number of nearest neighbors) to 5. We begin with a simple synthetic example to give some intuition about how LOD works.

6.1 Simple Synthetic Example

A simple synthetic example is given in Figure 1. The data set contains two circles. Eight points are selected by AOD and LOD. As can be seen, all the points selected by AOD are from the big circle, while LOD selects four points from the big circle and four from the small circle. The numbers beside the selected points denote the order in which they are selected. Clearly, the points selected by our LOD algorithm can better represent the original data set. We did not compare our algorithm with SVMactive because SVMactive cannot be applied in this case due to the lack of labeled points.

Figure 1: Data selection by active learning algorithms. The numbers beside the selected points denote the order in which they are selected. Clearly, the points selected by our LOD algorithm can better represent the original data set. Note that the SVMactive algorithm cannot be applied in this case due to the lack of labeled points.

6.2 Image Retrieval Experimental Design

The image database we used consists of 7,900 images from 79 semantic categories of the COREL data set. It is a large and heterogeneous image set. Each image is represented as a 128-dimensional vector as described in Section 5.1. Figure 2 shows some sample images.

Figure 2: Sample images from the categories bead, elephant, and ship.

To exhibit the advantages of our algorithm, we need a reliable way of evaluating the retrieval performance and comparing it with the other algorithms. We list the different aspects of the experimental design below.

6.2.1 Evaluation Metrics

We use the precision-scope curve and the precision rate [10] to evaluate the effectiveness of the image retrieval algorithms. The scope is specified by the number (N) of top-ranked images presented to the user. The precision is the ratio of the number of relevant images presented to the user to the scope N. The precision-scope curve describes the precision at various scopes and thus gives an overall performance evaluation of the algorithms. On the other hand, the precision rate emphasizes the precision at a particular value of the scope. In general, it is appropriate to present 20 images on a screen; putting more images on a screen may affect the quality of the presented images. Therefore, the precision at top 20 (N = 20) is especially important. In real-world image retrieval systems, the query image is usually not in the image database. To simulate such an environment, we use five-fold cross validation to evaluate the algorithms. More precisely, we divide the whole image database into five subsets of equal size. Thus, there are 20 images per category in each subset. At each run of cross validation, one subset is selected as the query set, and the other four subsets are used as the database for retrieval. The precision-scope curve and precision rate are computed by averaging the results over the five folds.
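The evaluation metrics above reduce to a simple computation. The sketch below, with hypothetical ranked_ids and relevant_ids inputs, shows the precision at a scope N and a precision-scope curve averaged over a set of queries; under the five-fold protocol, one such curve would be computed per fold and the five curves averaged. It is an illustration, not the evaluation code used in the paper.

    import numpy as np

    def precision_at(ranked_ids, relevant_ids, N):
        # Fraction of the top-N returned images that are relevant (precision at scope N).
        top = ranked_ids[:N]
        return sum(1 for img in top if img in relevant_ids) / float(N)

    def precision_scope_curve(results, scopes=(10, 20, 30, 40, 50)):
        # results: list of (ranked_ids, relevant_ids) pairs, one per query image.
        # Returns the precision averaged over all queries at each scope value.
        return {N: np.mean([precision_at(r, rel, N) for r, rel in results]) for N in scopes}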
6.2.2 Automatic Relevance Feedback Scheme

We designed an automatic feedback scheme to model the retrieval process. For each submitted query, our system retrieves and ranks the images in the database. Ten images are selected from the database for user labeling, and the label information is used by the system for re-ranking. Note that the images which have been selected at previous iterations are excluded from later selections. For each query, the automatic relevance feedback mechanism is performed for four iterations. It is important to note that the automatic relevance feedback scheme used here is different from the ones described in [8], [11]. In [8], [11], the top four relevant and irrelevant images were selected as the feedback images. However, this may not be practical. In real-world image retrieval systems, it is possible that most of the top-ranked images are relevant (or irrelevant). Thus, it is difficult for the user to find four relevant and four irrelevant images. It is more reasonable for the user to provide feedback information only on the 10 images selected by the system.

6.3 Image Retrieval Performance

In the real world, it is not practical to require the user to provide many rounds of feedback. The retrieval performance after the first two rounds of feedback (especially the first round) is therefore more important. Figure 3 shows the average precision-scope curves of the different algorithms for the first two feedback iterations.

Figure 3: The average precision-scope curves of the different algorithms for the first two feedback iterations. The LOD algorithm performs the best over the entire scope. Note that at the first round of feedback the SVMactive algorithm cannot be applied; it applies the ordinary SVM to build the initial classifier.

At the beginning of retrieval, the Euclidean distances in the original 128-dimensional space are used to rank the images in the database. After the user provides relevance feedback, the LRR, SVM, SVMactive, AOD, and LOD algorithms are then applied to re-rank the images. In order to reduce the time complexity of the active learning algorithms, we did not select the most informative images from the whole database but only from the top 500 images. For LRR and SVM, the user is required to label the top 10 images. For SVMactive, AOD, and LOD, the user is required to label the 10 most informative images selected by these algorithms. Note that SVMactive can only be applied when a classifier has already been built. Therefore, it cannot be applied at the first round, and we use the standard SVM to build the initial classifier. As can be seen, our LOD algorithm outperforms the other four algorithms over the entire scope. Also, the LRR algorithm performs better than SVM. This is because the LRR algorithm makes efficient use of the unlabeled images by incorporating a locality preserving regularizer into the ordinary regression objective function. The AOD algorithm performs the worst. As the scope gets larger, the performance difference between these algorithms gets smaller.
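The precision-versus-iteration numbers reported next are produced by the automatic feedback protocol of Section 6.2.2. A minimal sketch of that loop is given below; select_informative and train_and_rank are stand-ins for the compared learners (e.g., LOD selection followed by LRR re-ranking), and the ten-image, four-iteration, top-500 candidate-pool settings follow the description above. It is an illustration under these assumptions, not the authors' evaluation code.

    import numpy as np

    def simulate_feedback(query_id, features, category_of, select_informative, train_and_rank,
                          iterations=4, per_round=10, pool_size=500):
        # features: dict image_id -> feature vector (e.g., the 128-dimensional representation
        # of Section 5.1); labels are taken from the ground-truth categories, mimicking a user
        # who marks each system-selected image as relevant or irrelevant.
        ids = [i for i in features if i != query_id]
        dists = {i: np.linalg.norm(features[i] - features[query_id]) for i in ids}
        ranking = sorted(ids, key=dists.get)                      # initial Euclidean ranking
        labeled, history = {}, [ranking]
        for _ in range(iterations):
            pool = [i for i in ranking[:pool_size] if i not in labeled]
            picked = select_informative(pool, labeled, per_round) # e.g., LOD picks 10 images
            for i in picked:
                labeled[i] = int(category_of[i] == category_of[query_id])
            ranking = train_and_rank(features, labeled)           # e.g., LRR classifier re-ranks
            history.append(ranking)
        return history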
By iteratively adding the user's feedback, the corresponding precision results (at top 10, top 20, and top 30) of the five algorithms are shown in Figures 4(a), 4(b), and 4(c), respectively.

Figure 4: Performance evaluation of the five learning algorithms for relevance feedback image retrieval. (a) Precision at top 10, (b) precision at top 20, and (c) precision at top 30. As can be seen, our LOD algorithm consistently outperforms the other four algorithms.

As can be seen, our LOD algorithm performs the best in all the cases, and the LRR algorithm performs the second best. Both of these algorithms make use of the unlabeled images. This shows that the unlabeled images are helpful for discovering the intrinsic geometrical structure of the image space and can therefore enhance the retrieval performance. In the real world, the user may not be willing to provide many rounds of relevance feedback. Therefore, the retrieval performance in the first two rounds is especially important. As can be seen, our LOD algorithm achieves a 6.8% performance improvement for the top 10 results, 5.2% for the top 20 results, and 4.1% for the top 30 results, compared to the second best algorithm (LRR), after the first two rounds of relevance feedback.

6.4 Discussion

Several experiments on the Corel database have been systematically performed. We would like to highlight several interesting points:

1. It is clear that the use of active learning is beneficial in the image retrieval domain. There is a significant increase in performance from using the active learning methods. In particular, out of the three active learning methods (SVMactive, AOD, LOD), our proposed LOD algorithm performs the best.

2. In many real-world applications like relevance feedback image retrieval, there are generally two ways of reducing the labor-intensive manual labeling task. One is active learning, which selects the most informative samples to label; the other is semi-supervised learning, which makes use of the unlabeled samples to enhance the learning performance. Both of these strategies have been studied extensively in the past [14], [7], [5], [8]. The work presented in this paper is focused on active learning, but it also takes advantage of recent progress on semi-supervised learning [2]. Specifically, we incorporate a locality preserving regularizer into the standard regression framework and find the most informative samples with respect to the new objective function. In this way, the active learning and semi-supervised learning techniques are seamlessly unified for learning an optimal classifier.

3. The relevance feedback technique is crucial to image retrieval. For all five algorithms, the retrieval performance improves with more feedback provided by the user.

7. CONCLUSIONS AND FUTURE WORK

This paper describes a novel active learning algorithm, called Laplacian Optimal Design, to enable more effective relevance feedback image retrieval. Our algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space. Using techniques from experimental design, our algorithm finds the most informative images to label. These labeled images and the unlabeled images in the database are used to learn a classifier. The experimental results on the Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance. In this paper, we consider the image retrieval problem on a small, static, and closed-domain image data set. A much more challenging domain is the World Wide Web (WWW). For Web image search, it is possible to collect a large amount of user click information. This information can naturally be used to construct the affinity graph in our algorithm. However, the computational complexity in the Web scenario may
become a crucial issue .\nAlso , although our primary interest in this paper is focused on relevance feedback image retrieval , our results may also be of interest to researchers in patten recognition and machine learning , especially when a large amount of data is available but only a limited samples can be labeled ."} {"id": "J-14", "title": "", "abstract": "", "keyphrases": ["graphic game", "nash equilibrium", "approxim scheme", "exponenti-time algorithm", "approxim", "variou sociallydesir properti", "overal payoff", "distribut profit", "social welfar", "integ-payoff graphic game g", "sever drawback", "strategi profil", "degre-bound graph"], "prmu": [], "lvl-1": "Computing Good Nash Equilibria in Graphical Games \u2217 Edith Elkind Hebrew University of Jerusalem, Israel, and University of Southampton, Southampton, SO17 1BJ, U.K. Leslie Ann Goldberg University of Liverpool Liverpool L69 3BX, U.K. Paul Goldberg University of Liverpool Liverpool L69 3BX, U.K. ABSTRACT This paper addresses the problem of fair equilibrium selection in graphical games.\nOur approach is based on the data structure called the best response policy, which was proposed by Kearns et al. [13] as a way to represent all Nash equilibria of a graphical game.\nIn [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path.\nIn this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants.\nAnother attractive solution concept is a Nash equilibrium that maximizes the social welfare.\nWe show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size.\nThese two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In a large community of agents, an agent``s behavior is not likely to have a direct effect on most other agents: rather, it is just the agents who are close enough to him that will be affected.\nHowever, as these agents respond by adapting their behavior, more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community.\nThis is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players.\nIn an n-player graphical game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph.\nIf the maximum degree of G is \u0394, and each player has two actions available to him, then the game can be represented using n2\u0394+1 numbers.\nIn contrast, we need n2n numbers to represent a general n-player 2-action game, which is only practical for small values of n. 
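To make the size comparison concrete: with two actions, a player whose payoffs depend on his own action and those of at most Δ neighbors has a table of 2^{Δ+1} entries, for n·2^{Δ+1} numbers in total, versus n·2^n for an arbitrary n-player 2-action game. The toy calculation below is only an illustration of this arithmetic, not code from the paper.

    def table_sizes(n, max_degree):
        # Numbers needed to write down all payoffs: each of the n players has a table indexed
        # by his own action plus his neighbors' actions (2 actions each).
        graphical = n * 2 ** (max_degree + 1)   # n * 2^(Delta+1): payoffs depend only on neighbors
        general = n * 2 ** n                    # n * 2^n for an arbitrary n-player 2-action game
        return graphical, general

    print(table_sizes(n=30, max_degree=3))      # (480, 32212254720)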
For graphical games with constant \u0394, the size of the game is linear in n.\nOne of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash``s celebrated theorem (as graphical games are just a special case of n-player games).\nThe first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a boundeddegree tree.\nThey propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium.\nFor any > 0 this algorithm outputs an -Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than by unilaterally changing his strategy.\nWhile -Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks.\nFirst, the players may be sensitive to a small loss in payoffs, so the strategy profile that is an -Nash equilibrium will not be stable.\nThis will be the case even if there is only a small subset of players who are extremely price-sensitive, and for a large population of players it may be difficult to choose a value of that will satisfy everyone.\nSecond, the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria.\nTherefore, the (approximation to the) value of the best solution that corresponds to an -Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium.\nThis is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents, as the benchmark implied by an -Nash equilibrium may be unrealistic.\nFor these reasons, in this paper we focus on the problem of computing exact Nash equilibria.\nBuilding on ideas of [14], Elkind et al. 
[9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying 162 graph has degree 2 (that is, when the graph is a collection of paths and cycles).\nBy contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD.\n[9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.\nA graphical game may not have a unique Nash equilibrium, indeed it may have exponentially many.\nMoreover, some Nash equilibria are more desirable than others.\nRather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various sociallydesirable properties, such as maximizing overall payoff or distributing profit fairly.\nA useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game.\nIf this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties.\nIn fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) -Nash equilibrium.\nThe goal of this paper is to extend this to exact Nash equilibria.\n1.1 Our Results In this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly(n).\nWe focus on the problem of finding exact Nash equilibria with certain socially-desirable properties.\nIn particular, we show how to find a Nash equilibrium that (nearly) maximizes the social welfare, i.e., the sum of the players'' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.\nGraphical games on bounded-degree trees have a simple algebraic structure.\nOne attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the strategy of every player is a rational number.\nSection 3 studies the algebraic structure of those Nash equilibria that maximize social welfare.\nWe show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex.\nIn fact, for any algebraic number \u03b1 \u2208 [0, 1] with degree at most n, we exhibit a graphical game on a path of length O(n) such that, in the unique social welfare-maximizing Nash equilibrium of this game, one of the players plays the mixed strategy \u03b1.1 This result shows that it may be difficult to represent an optimal Nash equilibrium.\nIt seems to be a novel feature of the setting we consider here, that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.\nAs the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation.\nHowever, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria.\nIn Section 4, we describe an algorithm that satisfies this requirement.\nNamely, we propose an algorithm that for any > 0 finds a Nash equilibrium whose total payoff is within of optimal.\nIt runs in polynomial time (Theorem 3,4) for any graphical game on a bounded-degree tree for which the data 
structure proposed by [13] (the so-called best response policy, defined below) is of size poly(n) (note that, as shown in [9], this is always the case when the underlying graph is a path).\nMore pre1 A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.\ncisely, the running time of our algorithm is polynomial in n, Pmax, and 1/ , where Pmax is the maximum absolute value of an entry of a payoff matrix, i.e., it is a pseudopolynomial algorithm, though it is fully polynomial with respect to .\nWe show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 \u2212 factor from the optimal.\nIn Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti.\nUsing the idea from Section 4 we give (Theorem 5) a fully polynomial time approximation scheme for this problem.\nThe running time of the algorithm is bounded by a polynomial in n, Pmax, and .\nIf the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti \u2212 .\nIn Section 6, we introduce other natural criteria for selecting a good Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria.\nIn particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare, while guaranteeing that each individual payoff is close to a prescribed threshold.\nIn Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff.\nFinally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other.\n1.2 Related Work Our approximation scheme (Theorem 3 and Theorem 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable.\nFor two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard.\nIn particular, this is the case for Nash equilibria that maximize the social welfare [11, 6].\nMoreover, it is likely to be intractable even to approximate such equilibria.\nIn particular, Chen, Deng and Teng [4] show that there exists some , inverse polynomial in n, for which computing an -Nash equilibrium in 2-player games with n actions per player is PPAD-complete.\nLipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them.\nNote that these algorithms are not polynomial-time in general.\nThe games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree.\nA correlated equilibrium (CE) (introduced by Aumann [2]) is a distribution over vectors of players'' actions with the property that if any player is told his own action (the value of his own component) from a vector 
generated by that distribution, then he cannot increase his expected payoff by changing his action.\nAny Nash equilibrium is a CE but the converse does not hold in general.\nIn contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of conciselyrepresented multiplayer games) in polynomial time [17].\nBut, for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [18].\nHowever, the NP-hardness results apply to more general games than the one we consider here, in particular the graphs are not trees.\nFrom [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff 163 of the best correlated equilibrium is higher than the best Nash equilibrium, and we discuss this issue further in Section 7.\n2.\nPRELIMINARIES AND NOTATION We consider graphical games in which the underlying graph G is an n-vertex tree, in which each vertex has at most \u0394 children.\nEach vertex has two actions, which are denoted by 0 and 1.\nA mixed strategy of a player V is represented as a single number v \u2208 [0, 1], which denotes the probability that V selects action 1.\nFor the purposes of the algorithm, the tree is rooted arbitrarily.\nFor convenience, we assume without loss of generality that the root has a single child, and that its payoff is independent of the action chosen by the child.\nThis can be achieved by first choosing an arbitrary root of the tree, and then adding a dummy parent of this root, giving the new parent a constant payoff function, e.g., 0.\nGiven an edge (V, W ) of the tree G, and a mixed strategy w for W , let G(V,W ),W =w be the instance obtained from G by (1) deleting all nodes Z which are separated from V by W (i.e., all nodes Z such that the path from Z to V passes through W ), and (2) restricting the instance so that W is required to play mixed strategy w. 
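Since each player has two actions, the quantities used throughout (the expected payoffs of playing 0 or 1 given the neighbors' mixed strategies, and hence best responses) are straightforward to evaluate. The sketch below is a small illustration of this computation under an assumed dictionary encoding of the payoff table; it is not part of the algorithm of [13]. The toy example at the end mirrors a player who is paid 1 for mismatching his single neighbor, and who is indifferent exactly when that neighbor plays 1/2.

    from itertools import product

    def expected_payoff(own_action, payoff_table, neighbor_probs):
        # payoff_table maps (own_action, a_1, ..., a_k) -> payoff, where a_i is the pure action
        # (0 or 1) of the i-th neighbor; neighbor_probs[i] is the probability neighbor i plays 1.
        total = 0.0
        for actions in product((0, 1), repeat=len(neighbor_probs)):
            prob = 1.0
            for a, p in zip(actions, neighbor_probs):
                prob *= p if a == 1 else 1.0 - p
            total += prob * payoff_table[(own_action,) + actions]
        return total

    # Toy player with one neighbor: he gets 1 when his action differs from the neighbor's.
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    w = 0.5                                                   # neighbor's mixed strategy
    p0, p1 = expected_payoff(0, table, [w]), expected_payoff(1, table, [w])
    print(p0, p1)   # 0.5 0.5 -> indifferent, so any v in [0, 1] is a best response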
Definition 1.\nSuppose that (V, W ) is an edge of the tree, that v is a mixed strategy for V and that w is a mixed strategy for W .\nWe say that v is a potential best response to w (denoted by v \u2208 pbrV (w)) if there is an equilibrium in the instance G(V,W ),W =w in which V has mixed strategy v.\nWe define the best response policy for V , given W , as B(W, V ) = {(w, v) | v \u2208 pbrV (w), w \u2208 [0, 1]}.\nThe upstream pass of the generic algorithm of [13] considers every node V (other than the root) and computes the best response policy for V given its parent.\nWith the above assumptions about the root, the downstream pass is straightforward.\nThe root selects a mixed strategy w for the root W and a mixed strategy v \u2208 B(W, V ) for each child V of W .\nIt instructs each child V to play v.\nThe remainder of the downward pass is recursive.\nWhen a node V is instructed by its parent to adopt mixed strategy v, it does the following for each child U - It finds a pair (v, u) \u2208 B(V, U) (with the same v value that it was given by its parent) and instructs U to play u.\nThe best response policy for a vertex U given its parent V can be represented as a union of rectangles, where a rectangle is defined by a pair of closed intervals (IV , IU ) and consists of all points in IV \u00d7 IU ; it may be the case that one or both of the intervals IV and IU consists of a single point.\nIn order to perform computations on B(V, U), and to bound the number of rectangles, [9] used the notion of an event point, which is defined as follows.\nFor any set A \u2286 [0, 1]2 that is represented as a union of a finite number of rectangles, we say that a point u \u2208 [0, 1] on the U-axis is a Uevent point of A if u = 0 or u = 1 or the representation of A contains a rectangle of the form IV \u00d7 IU and u is an endpoint of IU ; V -event points are defined similarly.\nFor many games considered in this paper, the underlying graph is an n-vertex path, i.e., a graph G = (V, E) with V = {V1, ... , Vn} and E = {(V1, V2), ... 
, (Vn\u22121, Vn)}.\nIn [9], it was shown that for such games, the best response policy has only polynomially-many rectangles.\nThe proof that the number of rectangles in B(Vj+1, Vj) is polynomial proceeds by first showing that the number of event points in B(Vj+1, Vj ) cannot exceed the number of event points in B(Vj, Vj\u22121) by more than 2, and using this fact to bound the number of rectangles in B(Vj+1, Vj ).\nLet P0 (V ) and P1 (V ) be the expected payoffs to V when it plays 0 and 1, respectively.\nBoth P0 (V ) and P1 (V ) are multilinear functions of the strategies of V ``s neighbors.\nIn what follows, we will frequently use the following simple observation.\nCLAIM 1.\nFor a vertex V with a single child U and parent W , given any A, B, C, D \u2208 Q, A , B , C , D \u2208 Q, one can select the payoffs to V so that P0 (V ) = Auw + Bu + Cw + D, P1 (V ) = A uw + B u + C w + D .\nMoreover, if all A, B, C, D, A , B , C , D are integer, the payoffs to V are integer as well.\nPROOF.\nWe will give the proof for P0 (V ); the proof for P1 (V ) is similar.\nFor i, j = 0, 1, let Pij be the payoff to V when U plays i, V plays 0 and W plays j.\nWe have P0 (V ) = P00(1 \u2212 u)(1 \u2212 w) + P10u(1 \u2212 w) + P01(1 \u2212 u)w + P11uw.\nWe have to select the values of Pij so that P00 \u2212 P10 \u2212 P01 + P11 = A, \u2212P00 + P10 = B, \u2212P00 + P01 = C, P00 = D.\nIt is easy to see that the unique solution is given by P00 = D, P01 = C + D, P10 = B + D, P11 = A + B + C + D.\nThe input to all algorithms considered in this paper includes the payoff matrices for each player.\nWe assume that all elements of these matrices are integer.\nLet Pmax be the greatest absolute value of any element of any payoff matrix.\nThen the input consists of at most n2\u0394+1 numbers, each of which can be represented using log Pmax bits.\n3.\nNASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE: SOLUTIONS IN R \\ Q From the point of view of social welfare, the best Nash equilibrium is the one that maximizes the sum of the players'' expected payoffs.\nUnfortunately, it turns out that computing such a strategy profile exactly is not possible: in this section, we show that even if all players'' payoffs are integers, the strategy profile that maximizes the total payoff may have irrational coordinates; moreover, it may involve algebraic numbers of an arbitrary degree.\n3.1 Warm-up: quadratic irrationalities We start by providing an example of a graphical game on a path of length 3 with integer payoffs such that in the Nash equilibrium that maximizes the total payoff, one of the players has a strategy in R \\ Q.\nIn the next subsection, we will extend this example to algebraic numbers of arbitrary degree n; to do so, we have to consider paths of length O(n).\nTHEOREM 1.\nThere exists an integer-payoff graphical game G on a 3-vertex path UV W such that, in any Nash equilibrium of G that maximizes social welfare, the strategy, u, of the player U and the total payoff, p, satisfy u, p \u2208 R \\ Q. 
PROOF.\nThe payoffs to the players in G are specified as follows.\nThe payoff to U is identically 0, i.e., P0 (U) = P1 (U) = 0.\nUsing Claim 1, we select the payoffs to V so that P0 (V ) = \u2212uw + 3w and P1 (V ) = P0 (V ) + w(u + 2) \u2212 (u + 1), where u and w are the (mixed) strategies of U and W , respectively.\nIt follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = u+1 u+2 .\nObserve that for any u \u2208 [0, 1] we have f(u) \u2208 [0, 1].\nThe payoff to W is 0 if it selects the same action as V and 1 otherwise.\nCLAIM 2.\nAll Nash equilibria of the game G are of the form (u, 1/2, f(u)).\nThat is, in any Nash equilibrium, V plays v = 1/2 and W plays w = f(u).\nMoreover, for any value of u, the vector of strategies (u, 1/2, f(u)) constitutes a Nash equilibrium.\nPROOF.\nIt is easy to check that for any u \u2208 [0, 1], the vector (u, 1/2, f(u)) is a Nash equilibrium.\nIndeed, U is content to play 164 any mixed strategy u no matter what V and W do.\nFurthermore, V is indifferent between 0 and 1 as long as w = f(u), so it can play 1/2.\nFinally, if V plays 0 and 1 with equal probability, W is indifferent between 0 and 1, so it can play f(u).\nConversely, suppose that v > 1/2.\nThen W strictly prefers to play 0, i.e., w = 0.\nThen for V we have P1 (V ) = P0 (V ) \u2212 (u + 1), i.e., P1 (V ) < P0 (V ), which implies v = 0, a contradiction.\nSimilarly, if v < 1/2, player W prefers to play 1, so we have w = 1.\nHence, P1 (V ) = P0 (V ) + (u + 2) \u2212 (u + 1), i.e., P1 (V ) > P0 (V ), which implies v = 1, a contradiction.\nFinally, if v = 1/2, but w = f(u), player V is not indifferent between 0 and 1, so he would deviate from playing 1/2.\nThis completes the proof of Claim 2.\nBy Claim 2, the total payoff in any Nash equilibrium of this game is a function of u.\nMore specifically, the payoff to U is 0, the payoff to V is \u2212uf(u) + 3f(u), and the payoff to W is 1/2.\nTherefore, the Nash equilibrium with the maximum total payoff corresponds to the value of u that maximizes g(u) = \u2212u (u + 1) u + 2 + 3 u + 1 u + 2 = \u2212 (u \u2212 3)(u + 1) u + 2 .\nTo find extrema of g(u), we compute h(u) = \u2212 d du g(u).\nWe have h(u) = (2u \u2212 2)(u + 2) \u2212 (u \u2212 3)(u + 1) (u + 2)2 = u2 + 4u \u2212 1 (u + 2)2 .\nHence, h(u) = 0 if and only if u \u2208 {\u22122 + \u221a 5, \u22122 \u2212 \u221a 5}.\nNote that \u22122 + \u221a 5 \u2208 [0, 1].\nThe function g(u) changes sign at \u22122, \u22121, and 3.\nWe have g(u) < 0 for g > 3, g(u) > 0 for u < \u22122, so the extremum of g(u) that lies between 1 and 3, i.e., u = \u22122 + \u221a 5, is a local maximum.\nWe conclude that the social welfare-maximizing Nash equilibrium for this game is given by the vector of strategies (\u22122+\u221a 5, 1/2, (5 \u2212 \u221a 5)/5).\nThe respective total payoff is 0 \u2212 ( \u221a 5 \u2212 5)( \u221a 5 \u2212 1) \u221a 5 + 1 2 = 13/2 \u2212 2 \u221a 5.\nThis concludes the proof of Theorem 1.\n3.2 Strategies of arbitrary degree We have shown that in the social welfare-maximizing Nash equilibrium, some players'' strategies can be quadratic irrationalities, and so can the total payoff.\nIn this subsection, we will extend this result to show that we can construct an integer-payoff graphical game on a path whose social welfare-maximizing Nash equilibrium involves arbitrary algebraic numbers in [0, 1].\nTHEOREM 2.\nFor any degree-n algebraic number \u03b1 \u2208 [0, 1], there exists an integer payoff graphical game on a path of length O(n) such that, in all social 
welfare-maximizing Nash equilibria of this game, one of the players plays \u03b1.\nPROOF.\nOur proof consists of two steps.\nFirst, we construct a rational expression R(x) and a segment [x , x ] such that x , x \u2208 Q and \u03b1 is the only maximum of R(x) on [x , x ].\nSecond, we construct a graphical game whose Nash equilibria can be parameterized by u \u2208 [x , x ], so that at the equilibrium that corresponds to u the total payoff is R(u) and, moreover, some player``s strategy is u.\nIt follows that to achieve the payoff-maximizing Nash equilibrium, this player has to play \u03b1.\nThe details follow.\nLEMMA 1.\nGiven an algebraic number \u03b1 \u2208 [0, 1], deg(\u03b1) = n, there exist K2, ... , K2n+2 \u2208 Q and x , x \u2208 (0, 1) \u2229 Q such that \u03b1 is the only maximum of R(x) = K2 x + 2 + \u00b7 \u00b7 \u00b7 + K2n+2 x + 2n + 2 on [x , x ].\nPROOF.\nLet P(x) be the minimal polynomial of \u03b1, i.e., a polynomial of degree n with rational coefficients whose leading coefficient is 1 such that P(\u03b1) = 0.\nLet A = {\u03b11, ... , \u03b1n} be the set of all roots of P(x).\nConsider the polynomial Q1(x) = \u2212P2 (x).\nIt has the same roots as P(x), and moreover, for any x \u2208 A we have Q1(x) < 0.\nHence, A is the set of all maxima of Q1(x).\nNow, set R(x) = Q1(x) (x+2)...(x+2n+1)(x+2n+2) .\nObserve that R(x) \u2264 0 for all x \u2208 [0, 1] and R(x) = 0 if and only if Q1(x) = 0.\nHence, the set A is also the set of all maxima of R(x) on [0, 1].\nLet d = min{|\u03b1i \u2212 \u03b1| | \u03b1i \u2208 A, \u03b1i = \u03b1}, and set \u03b1 = max{\u03b1 \u2212 d/2, 0}, \u03b1 = min{\u03b1 + d/2, 1}.\nClearly, \u03b1 is the only zero (and hence, the only maximum) of R(x) on [\u03b1 , \u03b1 ].\nLet x and x be some rational numbers in (\u03b1 , \u03b1) and (\u03b1, \u03b1 ), respectively; note that by excluding the endpoints of the intervals we ensure that x , x = 0, 1.\nAs [x , x ] \u2282 [\u03b1 , \u03b1 ], we have that \u03b1 is the only maximum of R(x) on [x , x ].\nAs R(x) is a proper rational expression and all roots of its denominator are simple, by partial fraction decomposition theorem, R(x) can be represented as R(x) = K2 x + 2 + \u00b7 \u00b7 \u00b7 + K2n+2 x + 2n + 2 , where K2, ... , K2n+2 are rational numbers.\nConsider a graphical game on the path U\u22121V\u22121U0V0U1V1 ... Uk\u22121Vk\u22121Uk, where k = 2n + 2.\nIntuitively, we want each triple (Ui\u22121, Vi\u22121, Ui) to behave similarly to the players U, V , and W from the game described in the previous subsection.\nMore precisely, we define the payoffs to the players in the following way.\n\u2022 The payoff to U\u22121 is 0 no matter what everyone else does.\n\u2022 The expected payoff to V\u22121 is 0 if it plays 0 and u0 \u2212 (x \u2212 x )u\u22121 \u2212x if it plays 1, where u0 and u\u22121 are the strategies of U0 and U\u22121, respectively.\n\u2022 The expected payoff to V0 is 0 if it plays 0 and u1(u0 + 1)\u2212 u0 if it plays 1, where u0 and u1 are the strategies of U0 and U1, respectively.\n\u2022 For each i = 1, ... , k \u2212 1, the expected payoff to Vi when it plays 0 is P0 (Vi) = Aiuiui+1 \u2212 Aiui+1, and the expected payoff to Vi when it plays 1 is P1 (Vi) = P0 (Vi) + ui+1(2 \u2212 ui) \u2212 1, where Ai = \u2212Ki+1 and ui+1 and ui are the strategies of Ui+1 and Ui, respectively.\n\u2022 For each i = 0, ... 
, k, the payoff to Ui does not depend on Vi and is 1 if Ui and Vi\u22121 select different actions and 0 otherwise.\nWe will now characterize the Nash equilibria of this game using a sequence of claims.\nCLAIM 3.\nIn all Nash equilibria of this game V\u22121 plays 1/2, and the strategies of u\u22121 and u0 satisfy u0 = (x \u2212 x )u\u22121 + x .\nConsequently, in all Nash equilibria we have u0 \u2208 [x , x ].\n165 PROOF.\nThe proof is similar to that of Claim 2.\nLet f(u\u22121) = (x \u2212 x )u\u22121 + x .\nClearly, the player V\u22121 is indifferent between playing 0 and 1 if and only if u0 = f(u\u22121).\nSuppose that v\u22121 < 1/2.\nThen U0 strictly prefers to play 1, i.e., u0 = 1, so we have P1 (V\u22121) = P0 (V\u22121) + 1 \u2212 (x \u2212 x )u\u22121 \u2212 x .\nAs 1 \u2212 x \u2264 1 \u2212 (x \u2212 x )u\u22121 \u2212 x \u2264 1 \u2212 x for u\u22121 \u2208 [0, 1] and x < 1, we have P1 (V\u22121) > P0 (V\u22121), so V\u22121 prefers to play 1, a contradiction.\nSimilarly, if v\u22121 > 1/2, the player U0 strictly prefers to play 0, i.e., u0 = 0, so we have P1 (V\u22121) = P0 (V\u22121) \u2212 (x \u2212 x )u\u22121 \u2212 x .\nAs x < x , x > 0, we have P1 (V\u22121) < P0 (V\u22121), so V\u22121 prefers to play 0, a contradiction.\nFinally, if V\u22121 plays 1/2, but u0 = f(u\u22121), player V\u22121 is not indifferent between 0 and 1, so he would deviate from playing 1/2.\nAlso, note that f(0) = x , f(1) = x , and, moreover, f(u\u22121) \u2208 [x , x ] if and only if u\u22121 \u2208 [0, 1].\nHence, in all Nash equilibria of this game we have u0 \u2208 [x , x ].\nCLAIM 4.\nIn all Nash equilibria of this game for each i = 0, ... , k \u2212 1, we have vi = 1/2, and the strategies of the players Ui and Ui+1 satisfy ui+1 = fi(ui), where f0(u) = u/(u + 1) and fi(u) = 1/(2 \u2212 u) for i > 0.\nPROOF.\nThe proof of this claim is also similar to that of Claim 2.\nWe use induction on i to prove that the statement of the claim is true and, additionally, ui = 1 for i > 0.\nFor the base case i = 0, note that u0 = 0 by the previous claim (recall that x , x are selected so that x , x = 0, 1) and consider the triple (U0, V0, U1).\nLet v0 be the strategy of V0.\nFirst, suppose that v0 > 1/2.\nThen U1 strictly prefers to play 0, i.e., u1 = 0.\nThen for V0 we have P1 (V0) = P0 (V0) \u2212 u0.\nAs u0 = 0, we have P1 (V0) < P0 (V0), which implies v1 = 0, a contradiction.\nSimilarly, if v0 < 1/2, player U1 prefers to play 1, so we have u1 = 1.\nHence, P1 (V0) = P0 (V0) + 1.\nIt follows that P1 (V0) > P0 (V0), which implies v0 = 1, a contradiction.\nFinally, if v0 = 1/2, but u1 = u0/(u0 + 1), player V0 is not indifferent between 0 and 1, so he would deviate from playing 1/2.\nMoreover, as u1 = u0/(u0 + 1) and u0 \u2208 [0, 1], we have u1 = 1.\nThe argument for the inductive step is similar.\nNamely, suppose that the statement is proved for all i < i and consider the triple (Ui, Vi, Ui+1).\nLet vi be the strategy of Vi.\nFirst, suppose that vi > 1/2.\nThen Ui+1 strictly prefers to play 0, i.e., ui+1 = 0.\nThen for Vi we have P1 (Vi) = P0 (Vi)\u22121, i.e., P1 (Vi) < P0 (Vi), which implies vi = 0, a contradiction.\nSimilarly, if vi < 1/2, player Ui+1 prefers to play 1, so we have ui+1 = 1.\nHence, P1 (Vi) = P0 (Vi) + 1 \u2212 ui.\nBy inductive hypothesis, we have ui < 1.\nConsequently, P1 (Vi) > P0 (Vi), which implies vi = 1, a contradiction.\nFinally, if vi = 1/2, but ui+1 = 1/(2 \u2212 ui), player Vi is not indifferent between 0 and 1, so he would deviate from playing 1/2.\nMoreover, as ui+1 = 1/(2 
\u2212 ui) and ui < 1, we have ui+1 < 1.\nCLAIM 5.\nAny strategy profile of the form (u\u22121, 1/2, u0, 1/2, u1, 1/2, ... , uk\u22121, 1/2, uk), where u\u22121 \u2208 [0, 1], u0 = (x \u2212 x )u\u22121 + x , u1 = u0/(u0 + 1), and ui+1 = 1/(2 \u2212 ui) for i \u2265 1 constitutes a Nash equilibrium.\nPROOF.\nFirst, the player U\u22121``s payoffs do not depend on other players'' actions, so he is free to play any strategy in [0, 1].\nAs long as u0 = (x \u2212x )u\u22121 +x , player V\u22121 is indifferent between 0 and 1, so he is content to play 1/2; a similar argument applies to players V0, ... , Vk\u22121.\nFinally, for each i = 0, ... , k, the payoffs of player Ui only depend on the strategy of player Vi\u22121.\nIn particular, as long as vi\u22121 = 1/2, player Ui is indifferent between playing 0 and 1, so he can play any mixed strategy ui \u2208 [0, 1].\nTo complete the proof, note that (x \u2212 x )u\u22121 + x \u2208 [0, 1] for all u\u22121 \u2208 [0, 1], u0/(u0 + 1) \u2208 [0, 1] for all u0 \u2208 [0, 1], and 1/(2 \u2212 ui) \u2208 [0, 1] for all ui \u2208 [0, 1], so we have ui \u2208 [0, 1] for all i = 0, ... , k. Now, let us compute the total payoff under a strategy profile of the form given in Claim 5.\nThe payoff to U\u22121 is 0, and the expected payoff to each of the Ui, i = 0, ... , k, is 1/2.\nThe expected payoffs to V\u22121 and V0 are 0.\nFinally, for any i = 1, ... , k \u2212 1, the expected payoff to Vi is Ti = Aiuiui+1 \u2212 Aiui+1.\nIt follows that to find a Nash equilibrium with the highest total payoff, we have to maximize Pk\u22121 i=1 Ti subject to conditions u\u22121 \u2208 [0, 1], u0 = (x \u2212x )u\u22121+x , u1 = u0/(u0+1), and ui+1 = 1/(2\u2212ui) for i = 1, ... , k \u2212 1.\nWe would like to express Pk\u22121 i=1 Ti as a function of u0.\nTo simplify notation, set u = u0.\nLEMMA 2.\nFor i = 1, ... , k, we have ui = u+i\u22121 u+i .\nPROOF.\nThe proof is by induction on i. For i = 1, we have u1 = u/(u + 1).\nNow, for i \u2265 2 suppose that ui\u22121 = (u + i \u2212 2)/(u + i \u2212 1).\nWe have ui = 1/(2 \u2212 ui\u22121) = (u + i \u2212 1)/(2u + 2i \u2212 2 \u2212 u \u2212 i + 2) = (u + i \u2212 1)/(u + i).\nIt follows that for i = 1, ... , k \u2212 1 we have Ti = Ai u + i \u2212 1 u + i u + i u + i + 1 \u2212 Ai u + i u + i + 1 = \u2212Ai 1 u + i + 1 = Ki+1 u + i + 1 .\nObserve that as u\u22121 varies from 0 to 1, u varies from x to x .\nTherefore, to maximize the total payoff, we have to choose u \u2208 [x , x ] so as to maximize K2 u + 2 + \u00b7 \u00b7 \u00b7 + Kk u + k = R(u).\nBy construction, the only maximum of R(u) on [x , x ] is \u03b1.\nIt follows that in the payoff-maximizing Nash equilibrium of our game U0 plays \u03b1.\nFinally, note that the payoffs in our game are rational rather than integer.\nHowever, it is easy to see that we can multiply all payoffs to a player by their greatest common denominator without affecting his strategy.\nIn the resulting game, all payoffs are integer.\nThis concludes the proof of Theorem 2.\n4.\nAPPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM We have seen that the Nash equilibrium that maximizes the social welfare may involve strategies that are not in Q. 
Hence, in this section we focus on finding a Nash equilibrium that is almost optimal from the social welfare perspective.\nWe propose an algorithm that for any > 0 finds a Nash equilibrium whose total payoff is within from optimal.\nThe running time of this algorithm is polynomial in 1/ , n and |Pmax| (recall that Pmax is the maximum absolute value of an entry of a payoff matrix).\nWhile the negative result of the previous section is for graphical games on paths, our algorithm applies to a wider range of scenarios.\nNamely, it runs in polynomial time on bounded-degree trees 166 as long as the best response policy of each vertex, given its parent, can be represented as a union of a polynomial number of rectangles.\nNote that path graphs always satisfy this condition: in [9] we showed how to compute such a representation, given a graph with maximum degree 2.\nConsequently, for path graphs the running time of our algorithm is guaranteed to be polynomial.\n(Note that [9] exhibits a family of graphical games on bounded-degree trees for which the best response policies of some of the vertices, given their parents, have exponential size, when represented as unions of rectangles.)\nDue to space restrictions, in this version of the paper we present the algorithm for the case where the graph underlying the graphical game is a path.\nWe then state our result for the general case; the proof can be found in the full version of this paper [10].\nSuppose that s is a strategy profile for a graphical game G.\nThat is, s assigns a mixed strategy to each vertex of G. let EPV (s) be the expected payoff of player V under s and let EP(s) =P V EPV (s).\nLet M(G) = max{EP(s) | s is a Nash equilibrium for G}.\nTHEOREM 3.\nSuppose that G is a graphical game on an nvertex path.\nThen for any > 0 there is an algorithm that constructs a Nash equilibrium s for G that satisfies EP(s ) \u2265 M(G)\u2212 .\nThe running time of the algorithm is O(n4 P3 max/ 3 ) PROOF.\nLet {V1, ... , Vn} be the set of all players.\nWe start by constructing the best response policies for all Vi, i = 1, ... , n \u2212 1.\nAs shown in [9], this can be done in time O(n3 ).\nLet N > 5n be a parameter to be selected later, set \u03b4 = 1/N, and define X = {j\u03b4 | j = 0, ... , N}.\nWe say that vj is an event point for a player Vi if it is a Vi-event point for B(Vi, Vi\u22121) or B(Vi+1, Vi).\nFor each player Vi, consider a finite set of strategies Xi given by Xi = X \u222a {vj |vj is an event point for Vi}.\nIt has been shown in [9] that for any i = 2, ... , n, the best response policy B(Vi, Vi\u22121) has at most 2n + 4 Vi-event points.\nAs we require N > 5n, we have |Xi| \u2264 2N; assume without loss of generality that |Xi| = 2N.\nOrder the elements of Xi in increasing order as x1 i = 0 < x2 i < \u00b7 \u00b7 \u00b7 < x2N i .\nWe will refer to the strategies in Xi as discrete strategies of player Vi; a strategy profile in which each player has a discrete strategy will be referred to as a discrete strategy profile.\nWe will now show that even we restrict each player Vi to strategies from Xi, the players can still achieve a Nash equilibrium, and moreover, the best such Nash equilibrium (with respect to the social welfare) has total payoff at least M(G) \u2212 as long as N is large enough.\nLet s be a strategy profile that maximizes social welfare.\nThat is, let s = (s1, ... , sn) where si is the mixed strategy of player Vi and EP(s) = M(G).\nFor i = 1, ... 
, n, let ti = max{xj i | xj i \u2264 si}.\nFirst, we will show that the strategy profile t = (t1, ... , tn) is a Nash equilibrium for G. Fix any i, 1 < i \u2264 n, and let R = [v1, v2]\u00d7[u1, u2] be the rectangle in B(Vi, Vi\u22121) that contains (si, si\u22121).\nAs v1 is a Vi-event point of B(Vi, Vi\u22121), we have v1 \u2264 ti, so the point (ti, si\u22121) is inside R. Similarly, the point u1 is a Vi\u22121-event point of B(Vi, Vi\u22121), so we have u1 \u2264 ti\u22121, and therefore the point (ti, ti\u22121) is inside R.\nThis means that for any i, 1 < i \u2264 n, we have ti\u22121 \u2208 pbrVi\u22121 (ti), which implies that t = (t1, ... , tn) is a Nash equilibrium for G. Now, let us estimate the expected loss in social welfare caused by playing t instead of s. LEMMA 3.\nFor any pair of strategy profiles t, s such that |ti \u2212 si| \u2264 \u03b4 we have |EPVi (s) \u2212 EPVi (t)| \u2264 24Pmax\u03b4 for any i = 1, ... , n. PROOF.\nLet Pi klm be the payoff of the player Vi, when he plays k, Vi\u22121 plays l, and Vi+1 plays m. Fix i = 1, ... , n and for k, l, m \u2208 {0, 1}, set tklm = tk i\u22121(1 \u2212 ti\u22121)1\u2212k tl i(1 \u2212 ti)1\u2212l tm i+1(1 \u2212 ti+1)1\u2212m sklm = sk i\u22121(1 \u2212 si\u22121)1\u2212k sl i(1 \u2212 si)1\u2212l sm i+1(1 \u2212 si+1)1\u2212m .\nWe have |EPVi (s) \u2212 EPVi (t)| \u2264 X k,l,m=0,1 |Pi klm(tklm \u2212 sklm )| \u2264 8Pmax max klm |tklm \u2212 sklm | We will now show that for any k, l, m \u2208 {0, 1} we have |tklm \u2212 sklm | \u2264 3\u03b4; clearly, this implies the lemma.\nIndeed, fix k, l, m \u2208 {0, 1}.\nSet x = tk i\u22121(1 \u2212 ti\u22121)1\u2212k , x = sk i\u22121(1 \u2212 si\u22121)1\u2212k , y = tl i(1 \u2212 ti)1\u2212l , y = sl i(1 \u2212 si)1\u2212l , z = tm i+1(1 \u2212 ti+1)1\u2212m , z = sm i+1(1 \u2212 si+1)1\u2212m .\nObserve that if k = 0 then x \u2212 x = (1 \u2212 ti\u22121) \u2212 (1 \u2212 si\u22121), and if k = 1 then x \u2212 x = ti\u22121 \u2212 si\u22121, so |x \u2212 x | \u2264 \u03b4.\nA similar argument shows |y \u2212 y | \u2264 \u03b4, |z \u2212 z | \u2264 \u03b4.\nAlso, we have x, x , y, y , z, z \u2208 [0, 1].\nHence, |tklm \u2212sklm | = |xyz\u2212x y z | = |xyz \u2212 x yz + x yz \u2212 x y z + x y z \u2212 x y z | \u2264 |x \u2212 x |yz + |y \u2212 y |x z + |z \u2212 z |x y \u2264 3\u03b4.\nLemma 3 implies Pn i=1 |EPVi (s) \u2212 EPVi (t)| \u2264 24nPmax\u03b4, so by choosing \u03b4 < /(24nPmax), or, equivalently, setting N > 24nPmax/ , we can ensure that the total expected payoff for the strategy profile t is within from optimal.\nWe will now show that we can find the best discrete Nash equilibrium (with respect to the social welfare) using dynamic programming.\nAs t is a discrete strategy profile, this means that the strategy profile found by our algorithm will be at least as good as t. Define ml,k i to be the maximum total payoff that V1, ... , Vi\u22121 can achieve if each Vj , j \u2264 i, chooses a strategy from Xj , for each j < i the strategy of Vj is a potential best response to the strategy of Vj+1, and, moreover, Vi\u22121 plays xl i\u22121, Vi plays xk i .\nIf there is no way to choose the strategies for V1, ... , Vi\u22121 to satisfy these conditions, we set ml,k i = \u2212\u221e.\nThe values ml,k i , i = 1, ... , n; k, l = 1, ... , N, can be computed inductively, as follows.\nWe have ml,k 1 = 0 for k, l = 1, ... , N. Now, suppose that we have already computed ml,k j for all j < i; k, l = 1, ... , N. 
We will now show that we can find the best discrete Nash equilibrium (with respect to the social welfare) using dynamic programming. As t is a discrete strategy profile, this means that the strategy profile found by our algorithm will be at least as good as t.

Define m^{l,k}_i to be the maximum total payoff that V_1, ..., V_{i−1} can achieve if each V_j, j ≤ i, chooses a strategy from X_j, for each j < i the strategy of V_j is a potential best response to the strategy of V_{j+1}, and, moreover, V_{i−1} plays x^l_{i−1} and V_i plays x^k_i. If there is no way to choose strategies for V_1, ..., V_{i−1} that satisfies these conditions, we set m^{l,k}_i = −∞. The values m^{l,k}_i can be computed inductively for all i, k, l, as follows. We have m^{l,k}_1 = 0 for all k, l. Now, suppose that we have already computed m^{l,k}_j for all j < i and all k, l. To compute m^{l,k}_i, we first check whether (x^k_i, x^l_{i−1}) ∈ B(V_i, V_{i−1}). If this is not the case, we have m^{l,k}_i = −∞. Otherwise, consider the set Y = X_{i−2} ∩ pbr_{V_{i−2}}(x^l_{i−1}), i.e., the set of all discrete strategies of V_{i−2} that are potential best responses to x^l_{i−1}. The proof of Theorem 1 in [9] implies that the set pbr_{V_{i−2}}(x^l_{i−1}) is non-empty: the player V_{i−2} has a potential best response to any strategy of V_{i−1}, in particular to x^l_{i−1}. By construction of the set X_{i−2}, this implies that Y is not empty. For each x^j_{i−2} ∈ Y, let p^{jlk} be the payoff that V_{i−1} receives when V_{i−2} plays x^j_{i−2}, V_{i−1} plays x^l_{i−1}, and V_i plays x^k_i. Clearly, p^{jlk} can be computed in constant time. Then we have m^{l,k}_i = max{m^{j,l}_{i−1} + p^{jlk} | x^j_{i−2} ∈ Y}.

Finally, suppose that we have computed m^{l,k}_n for all l, k. We still need to take into account the payoff of player V_n. Hence, we consider all pairs (x^k_n, x^l_{n−1}) that satisfy x^l_{n−1} ∈ pbr_{V_{n−1}}(x^k_n), and pick the one that maximizes the sum of m^{l,k}_n and the payoff of V_n when he plays x^k_n and V_{n−1} plays x^l_{n−1}. This gives the maximum total payoff that the players can achieve in a Nash equilibrium using discrete strategies; the actual strategy profile that produces this payoff can be reconstructed using standard dynamic programming techniques. It is easy to see that each m^{l,k}_i can be computed in time O(N), so all of them can be computed in time O(nN^3). Recall that we have to select N ≥ 24 n P_max / ε to ensure that the strategy profile we output has total payoff within ε of optimal. We conclude that we can compute an ε-approximation to the best Nash equilibrium in time O(n^4 P_max^3 / ε^3). This completes the proof of Theorem 3.
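The following sketch spells out this dynamic program for a path. It is illustrative rather than definitive: it assumes a single stand-in predicate is_pbr(i, x, y), true when strategy x of player i is a potential best response to its right neighbour playing y (membership in B(V_{i+1}, V_i) in the paper's notation), and it treats the two end players as having a silent neighbour playing 0 whose coordinate their payoff tables ignore. All names are ours.

```python
import itertools

def expected_payoff(table, left, own, right):
    """Expected payoff of a path player whose 2x2x2 table is indexed as
    table[k][l][m]: left neighbour plays k, the player plays l, right neighbour
    plays m.  left, own, right are mixed strategies (probabilities of action 1)."""
    return sum(
        (left if k else 1 - left)
        * (own if l else 1 - own)
        * (right if m else 1 - right)
        * table[k][l][m]
        for k, l, m in itertools.product((0, 1), repeat=3)
    )

def best_discrete_ne_value(X, tables, is_pbr):
    """Sketch of the m^{l,k}_i recursion from the proof of Theorem 3 (0-based players).

    X[i]      -- sorted list of discrete strategies X_i of player i
    tables[i] -- 2x2x2 payoff table of player i
    is_pbr(i, x, y) -- stand-in for the best response policy between i and i+1

    Returns the best total payoff over discrete profiles in which every player's
    strategy is a potential best response to its right neighbour; assumes such a
    profile exists (the proof shows the relevant sets are never empty).
    """
    n = len(X)
    # state (l, k): player i-1 plays X[i-1][l], player i plays X[i][k];
    # value: best accumulated payoff of players 0..i-2 (player 0 is added up front)
    m = {
        (l, k): expected_payoff(tables[0], 0.0, x0, x1)
        for l, x0 in enumerate(X[0])
        for k, x1 in enumerate(X[1])
        if is_pbr(0, x0, x1)
    }
    for i in range(2, n):
        new_m = {}
        for l, x_prev in enumerate(X[i - 1]):
            for k, x_cur in enumerate(X[i]):
                if not is_pbr(i - 1, x_prev, x_cur):
                    continue                      # (x_cur, x_prev) not in B(V_i, V_{i-1})
                candidates = [
                    m[(j, l)] + expected_payoff(tables[i - 1], x_pp, x_prev, x_cur)
                    for j, x_pp in enumerate(X[i - 2])
                    if (j, l) in m                # x_pp is part of a feasible chain
                ]
                if candidates:
                    new_m[(l, k)] = max(candidates)
        m = new_m
    # finally add the payoff of the last player, which has no right neighbour
    return max(
        v + expected_payoff(tables[n - 1], X[n - 2][l], X[n - 1][k], 0.0)
        for (l, k), v in m.items()
    )
```

The recursion keeps only the value; as the text notes, the witnessing profile can be recovered by standard backtracking over the same table.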
To state our result for the general case (i.e., when the underlying graph is a bounded-degree tree rather than a path), we need additional notation. If G has n players, let q(n) be an upper bound on the number of event points in the representation of any best response policy. That is, we assume that for any vertex U with parent V, B(V, U) has at most q(n) event points. We will be interested in the situation in which q(n) is polynomial in n.

THEOREM 4. Let G be an n-player graphical game on a tree in which each node has at most Δ children. Suppose we are given a set of best response policies for G in which each best response policy B(V, U) is represented by a set of rectangles with at most q(n) event points. For any ε > 0, there is an algorithm that constructs a Nash equilibrium s' for G that satisfies EP(s') ≥ M(G) − ε. The running time of the algorithm is polynomial in n, P_max and ε^{−1} provided that the tree has bounded degree (that is, Δ = O(1)) and q(n) is a polynomial in n. In particular, if N = max((Δ + 1)q(n) + 1, n^{2Δ+2}(Δ + 2)P_max ε^{−1}) and Δ > 1, then the running time is O(nΔ(2N)^Δ).

For the proof of this theorem, see [10].

4.1 A polynomial-time algorithm for multiplicative approximation

The running time of our algorithm is pseudopolynomial rather than polynomial, because it includes a factor which is polynomial in P_max, the maximum (in absolute value) entry of any payoff matrix. If we are interested in a multiplicative approximation rather than an additive one, this can be improved to polynomial.

First, note that we cannot expect a multiplicative approximation for all inputs. That is, we cannot hope to have an algorithm that computes a Nash equilibrium with total payoff at least (1 − ε)M(G): if we had such an algorithm, then for graphical games G with M(G) = 0 it would be required to output the optimal solution. To show that this is infeasible, observe that we can use the techniques of Section 3.2 to construct two integer-coefficient graphical games on paths of length O(n) such that, for some X ∈ R, the maximal total payoff in the first game is X, the maximal total payoff in the second game is −X, and, for both games, the strategy profiles that achieve the maximal total payoffs involve algebraic numbers of degree n.
By combining the two games so that the first vertex of the second game is connected to the last vertex of the first game, while the payoffs of all players remain unchanged, we obtain a graphical game in which the best Nash equilibrium has total payoff 0, yet the strategies that lead to this payoff have high algebraic complexity.

However, we can achieve a multiplicative approximation when all entries of the payoff matrices are positive and the ratio between any two entries is polynomially bounded. Recall that we assume that all payoffs are integer, and let P_min > 0 be the smallest entry of any payoff matrix. In this case, for any strategy profile the payoff to player i is at least P_min, so the total payoff in the social-welfare-maximizing Nash equilibrium s satisfies M(G) ≥ n P_min. Moreover, Lemma 3 implies that by choosing δ < ε P_min / (24 P_max), we can ensure that the Nash equilibrium t produced by our algorithm satisfies

Σ_{i=1}^n EP_{V_i}(s) − Σ_{i=1}^n EP_{V_i}(t) ≤ 24 P_max δ n ≤ ε n P_min ≤ ε M(G),

i.e., for this value of δ we have Σ_{i=1}^n EP_{V_i}(t) ≥ (1 − ε)M(G). Recall that the running time of our algorithm is O(nN^3), where N has to satisfy N > 5n and N = 1/δ. It follows that if P_min > 0 and P_max/P_min = poly(n), we can choose N so that our algorithm provides a multiplicative approximation guarantee and runs in time polynomial in n and 1/ε.
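A minimal helper, with an interface of our own choosing, showing how the grid resolution would be picked in the two regimes just discussed: for the additive guarantee N must exceed both 5n and 24·n·P_max/ε, while for the multiplicative guarantee P_max is replaced by the (polynomially bounded) ratio P_max/P_min, which is what makes the overall O(nN^3) running time genuinely polynomial.

```python
from math import ceil

def grid_size(n, p_max, eps, p_min=None):
    """Number of grid intervals N (so delta = 1/N) for the Theorem 3 discretisation.

    p_min=None -> additive guarantee:        total payoff >= M(G) - eps
    p_min>0    -> multiplicative guarantee:  total payoff >= (1 - eps) * M(G)
    """
    if p_min is None:
        bound = 24 * n * p_max / eps          # delta < eps / (24 n P_max)
    else:
        bound = 24 * p_max / (eps * p_min)    # delta < eps P_min / (24 P_max)
    return max(5 * n + 1, ceil(bound) + 1)
```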
5. BOUNDED PAYOFF NASH EQUILIBRIA

Another natural way to define what constitutes a good Nash equilibrium is to require that each player's expected payoff exceed a certain threshold; these thresholds do not have to be the same for all players. In this case, in addition to the payoff matrices of the n players, we are given n numbers T_1, ..., T_n, and our goal is to find a Nash equilibrium in which the payoff of player i is at least T_i, or to report that no such Nash equilibrium exists. It turns out that we can design an FPTAS for this problem using the same techniques as in the previous section.

THEOREM 5. Given a graphical game G on an n-vertex path and n rational numbers T_1, ..., T_n, suppose that there exists a strategy profile s such that s is a Nash equilibrium for G and EP_{V_i}(s) ≥ T_i for i = 1, ..., n. Then for any ε > 0 we can find, in time O(max{n P_max^3/ε^3, n^4/ε^3}), a strategy profile s' such that s' is a Nash equilibrium for G and EP_{V_i}(s') ≥ T_i − ε for i = 1, ..., n.

PROOF. The proof is similar to that of Theorem 3. First, we construct the best response policies for all players, choose N > 5n, and construct the sets X_i, i = 1, ..., n, as described in the proof of Theorem 3. Consider a strategy profile s such that s is a Nash equilibrium for G and EP_{V_i}(s) ≥ T_i for i = 1, ..., n. We construct the strategy profile t given by t_i = max{x^j_i | x^j_i ≤ s_i} and use the same argument as in the proof of Theorem 3 to show that t is a Nash equilibrium for G. By Lemma 3, we have |EP_{V_i}(s) − EP_{V_i}(t)| ≤ 24 P_max δ, so by choosing δ < ε/(24 P_max), or, equivalently, N > max{5n, 24 P_max/ε}, we can ensure that EP_{V_i}(t) ≥ T_i − ε for i = 1, ..., n. Now, we will use dynamic programming to find a discrete Nash equilibrium that satisfies EP_{V_i}(t) ≥ T_i − ε for i = 1, ..., n. As t is a discrete strategy profile, our algorithm will succeed whenever there is a Nash equilibrium s with EP_{V_i}(s) ≥ T_i for i = 1, ..., n, as required by the theorem.

Let z^{l,k}_i = 1 if there is a discrete strategy profile such that for any j < i the strategy of player V_j is a potential best response to the strategy of V_{j+1}, the expected payoff of V_j is at least T_j − ε, and, moreover, V_{i−1} plays x^l_{i−1} and V_i plays x^k_i. Otherwise, let z^{l,k}_i = 0. We can compute the z^{l,k}_i inductively for all i, k, l, as follows. We have z^{l,k}_1 = 1 for all k, l. Now, suppose that we have already computed z^{l,k}_j for all j < i and all k, l. To compute z^{l,k}_i, we first check whether (x^k_i, x^l_{i−1}) ∈ B(V_i, V_{i−1}). If this is not the case, clearly z^{l,k}_i = 0. Otherwise, consider the set Y = X_{i−2} ∩ pbr_{V_{i−2}}(x^l_{i−1}), i.e., the set of all discrete strategies of V_{i−2} that are potential best responses to x^l_{i−1}. It has been shown in the proof of Theorem 3 that Y ≠ ∅. For each x^j_{i−2} ∈ Y, let p^{jlk} be the payoff that V_{i−1} receives when V_{i−2} plays x^j_{i−2}, V_{i−1} plays x^l_{i−1}, and V_i plays x^k_i; clearly, p^{jlk} can be computed in constant time. If there exists an x^j_{i−2} ∈ Y such that z^{j,l}_{i−1} = 1 and p^{jlk} ≥ T_{i−1} − ε, set z^{l,k}_i = 1; otherwise, set z^{l,k}_i = 0. (Note that the payoff of V_{i−1} is determined by the strategies of V_{i−2}, V_{i−1} and V_i, so this is exactly the point at which V_{i−1}'s threshold can be checked.)

Having computed the z^{l,k}_n, we check whether z^{l,k}_n = 1 for some pair (l, k). If such a pair of indices exists, we instruct V_n to play x^k_n and use dynamic programming techniques (or, equivalently, the downstream pass of the algorithm of [13]) to find a Nash equilibrium s' that satisfies EP_{V_i}(s') ≥ T_i − ε for i = 1, ..., n (recall that V_n is a dummy player, i.e., we assume T_n = 0 and EP_{V_n}(s') = 0 for any choice of s'). If z^{l,k}_n = 0 for all l, k, there is no discrete Nash equilibrium s' that satisfies EP_{V_i}(s') ≥ T_i − ε for i = 1, ..., n, and hence no Nash equilibrium s (not necessarily discrete) such that EP_{V_i}(s) ≥ T_i for i = 1, ..., n. The running time analysis is similar to that for Theorem 3; we conclude that the running time of our algorithm is O(nN^3) = O(max{n P_max^3/ε^3, n^4/ε^3}).
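For concreteness, here is a sketch of the z^{l,k}_i recursion. It reuses the expected_payoff helper and the stand-in interfaces (X, tables, is_pbr) from the sketch after Theorem 3, and it treats the last player as a dummy with threshold 0, as in the text; again the names and the exact interface are our own assumptions.

```python
def exists_discrete_ne_with_thresholds(X, tables, is_pbr, T, eps):
    """True iff some discrete potential-best-response chain gives every player
    i < n-1 an expected payoff of at least T[i] - eps (0-based players).

    Same conventions as the earlier sketch: X[i] is the discrete strategy set of
    player i, tables[i] its 2x2x2 payoff table, is_pbr(i, x, y) the stand-in for
    the best response policy between players i and i+1, T[i] the payoff threshold.
    """
    n = len(X)
    # feasible set of pairs (l, k): player i-1 plays X[i-1][l], player i plays X[i][k];
    # player 0's payoff depends only on players 0 and 1, so its threshold is checked here
    feasible = {
        (l, k)
        for l, x0 in enumerate(X[0])
        for k, x1 in enumerate(X[1])
        if is_pbr(0, x0, x1)
        and expected_payoff(tables[0], 0.0, x0, x1) >= T[0] - eps
    }
    for i in range(2, n):
        new_feasible = set()
        for l, x_prev in enumerate(X[i - 1]):
            for k, x_cur in enumerate(X[i]):
                if not is_pbr(i - 1, x_prev, x_cur):
                    continue
                # player i-1's payoff is now fully determined; check its threshold
                for j, x_pp in enumerate(X[i - 2]):
                    if (j, l) in feasible and \
                       expected_payoff(tables[i - 1], x_pp, x_prev, x_cur) >= T[i - 1] - eps:
                        new_feasible.add((l, k))
                        break
        feasible = new_feasible
    return bool(feasible)
```

As in the text, a witnessing equilibrium would then be extracted from the surviving pairs by a downstream pass; the sketch only answers the feasibility question.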
REMARK 1. Theorem 5 can be extended to trees of bounded degree in the same way as Theorem 4.

5.1 Exact Computation

Another approach to finding Nash equilibria with bounded payoffs is based on inductively computing subsets of the best response policies of all players so as to exclude the points that do not provide sufficient payoffs to some of the players. Formally, we say that a strategy v of the player V is a potential best response to a strategy w of its parent W with respect to a threshold vector T = (T_1, ..., T_n), denoted v ∈ pbr_V(w, T), if there is an equilibrium in the instance G_{(V,W), W=w} in which V plays the mixed strategy v and the payoff to any player V_i downstream of V (including V itself) is at least T_i. The best response policy for V with respect to a threshold vector T is defined as B(W, V, T) = {(w, v) | v ∈ pbr_V(w, T), w ∈ [0, 1]}. It is easy to see that if any of the sets B(V_j, V_{j−1}, T), j = 1, ..., n, is empty, then it is impossible to provide all players with the expected payoffs prescribed by T. Otherwise, one can apply the downstream pass of the original algorithm of [13] to find a Nash equilibrium. As we assume that V_n is a dummy vertex whose payoff is identically 0, a Nash equilibrium with these payoffs exists as long as T_n ≤ 0 and B(V_n, V_{n−1}, T) is not empty. Using the techniques developed in [9], it is not hard to show that for any j = 1, ..., n, the set B(V_j, V_{j−1}, T) consists of a finite number of rectangles, and that one can compute B(V_{j+1}, V_j, T) given B(V_j, V_{j−1}, T).

The advantage of this approach is that it allows us to represent all Nash equilibria that provide the required payoffs to the players. However, it is not likely to be practical, since it turns out that the rectangles that appear in the representation of B(V_j, V_{j−1}, T) may have irrational coordinates.

CLAIM 6. There exists a graphical game G on a 3-vertex path UVW and a vector T = (T_1, T_2, T_3) such that B(W, V, T) cannot be represented as a union of a finite number of rectangles with rational coordinates.

PROOF. We define the payoffs to the players in G as follows. The payoff to U is identically 0, i.e., P_0(U) = P_1(U) = 0. Using Claim 1, we select the payoffs to V so that P_0(V) = uw and P_1(V) = P_0(V) + w − .8u − .1, where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = .8u + .1; observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. It is not hard to see that we have B(W, V) = [0, .1]×{0} ∪ [.1, .9]×[0, 1] ∪ [.9, 1]×{1}. The payoffs to W are not important for our construction; for example, set P_0(W) = P_1(W) = 0.

Now, set T = (0, 1/8, 0), i.e., we are interested in Nash equilibria in which V's expected payoff is at least 1/8. Suppose w ∈ [0, 1]. The player V can play a mixed strategy v when W is playing w as long as U plays u = f^{−1}(w) = 5w/4 − 1/8 (to ensure that V is indifferent between 0 and 1) and P_0(V) = P_1(V) = uw = w(5w/4 − 1/8) ≥ 1/8. The latter condition is satisfied if w ≤ (1 − √41)/20 < 0 or w ≥ (1 + √41)/20; note that .1 < (1 + √41)/20 < .9. For any other value of w, any strategy of U either makes V prefer one of the pure strategies or does not provide it with a sufficient expected payoff. There are also some values of w for which V can play a pure strategy (0 or 1) as a potential best response to W and guarantee itself an expected payoff of at least 1/8; it can be shown that these values of w form a finite number of segments in [0, 1]. We conclude that any representation of B(W, V, T) as a union of a finite number of rectangles must contain a rectangle of the form [(1 + √41)/20, w']×[v', v''] for some w', v', v'' ∈ [0, 1].
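The irrational endpoint in this construction comes from a simple quadratic: w(5w/4 − 1/8) ≥ 1/8 is equivalent to 10w^2 − w − 1 ≥ 0, whose positive root is (1 + √41)/20 ≈ 0.37. A few lines of Python confirm the threshold and that it falls strictly inside the segment [.1, .9] on which V can mix:

```python
from math import sqrt, isclose

# w * (5w/4 - 1/8) >= 1/8  <=>  10 w^2 - w - 1 >= 0  (multiply through by 8)
root = (1 + sqrt(41)) / 20
assert isclose(root * (5 * root / 4 - 1 / 8), 1 / 8)    # payoff is exactly 1/8 at the root
assert 0.1 < root < 0.9                                 # inside the mixing region of B(W, V)
assert (root - 1e-9) * (5 * (root - 1e-9) / 4 - 1 / 8) < 1 / 8   # fails just below the root
print(f"(1 + sqrt(41))/20 = {root:.6f}")                # ~0.370156, an irrational endpoint
```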
On the other hand, it can be shown that for any integer payoff matrices and threshold vectors and any j = 1, ..., n − 1, the sets B(V_{j+1}, V_j, T) contain no rectangles of the form [u', u'']×{v} or {v}×[w', w''] with v ∈ R \ Q. This means that if B(V_n, V_{n−1}, T) is non-empty, i.e., there is a Nash equilibrium with the payoffs prescribed by T, then the downstream pass of the algorithm of [13] can always pick a strategy profile that forms a Nash equilibrium, provides a payoff of at least T_i to each player V_i, and has no irrational coordinates. Hence, unlike in the case of the Nash equilibrium that maximizes the social welfare, working with irrational numbers is not necessary, and the fact that the algorithm discussed in this section has to do so can be seen as an argument against using this approach.

6. OTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM

In this section, we consider several other criteria that can be useful in selecting a Nash equilibrium.

6.1 Combining welfare maximization with bounds on payoffs

In many real-life scenarios, we want to maximize the social welfare subject to certain restrictions on the payoffs to individual players. For example, we may want to ensure that no player gets a negative expected payoff, or that the expected payoff to player i is at least P^i_max − ξ, where P^i_max is the maximum entry of i's payoff matrix and ξ is a fixed parameter. Formally, given a graphical game G and a vector T_1, ..., T_n, let S be the set of all Nash equilibria s of G that satisfy T_i ≤ EP_{V_i}(s) for i = 1, ..., n, and let ŝ = argmax_{s ∈ S} EP(s). If the set S is non-empty, we can find a Nash equilibrium that is ε-close to satisfying the payoff bounds and whose total payoff is within ε of that of ŝ by combining the algorithms of Section 4 and Section 5. Namely, for a given ε > 0, choose δ as in the proof of Theorem 3, and let X_i be the set of all discrete strategies of player V_i (for a formal definition, see the proof of Theorem 3). Combining the proofs of Theorem 3 and Theorem 5, we can see that the strategy profile t̂ given by t̂_i = max{x^j_i | x^j_i ≤ ŝ_i} satisfies EP_{V_i}(t̂) ≥ T_i − ε and |EP(ŝ) − EP(t̂)| ≤ ε. Define m̂^{l,k}_i to be the maximum total payoff that V_1, ..., V_{i−1} can achieve if each V_j, j ≤ i, chooses a strategy from X_j, for each j < i the strategy of V_j is a potential best response to the strategy of V_{j+1} and the payoff to player V_j is at least T_j − ε, and, moreover, V_{i−1} plays x^l_{i−1} and V_i plays x^k_i. If there is no way to choose strategies for V_1, ..., V_{i−1} that satisfies these conditions, we set m̂^{l,k}_i = −∞. The m̂^{l,k}_i can be computed by dynamic programming similarly to the m^{l,k}_i and z^{l,k}_i in the proofs of Theorems 3 and 5. Finally, as in the proof of Theorem 3, we use the m̂^{l,k}_n to select the best discrete Nash equilibrium subject to the payoff constraints.

Even more generally, we may want to maximize the total payoff to a subset of players (who are assumed to be able to redistribute the profits fairly among themselves) while guaranteeing certain expected payoffs to (a subset of) the other players. This problem can be handled similarly.
6.2 A minimax approach

A more egalitarian measure of the quality of a Nash equilibrium is the minimal expected payoff to a player. The optimal solution with respect to this measure is a Nash equilibrium in which the minimal expected payoff to a player is maximal. To find an approximation to such a Nash equilibrium, we can combine the algorithm of Section 5 with binary search over the space of potential lower bounds. Note that the expected payoff to any player V_i under any strategy profile s satisfies −P_max ≤ EP_{V_i}(s) ≤ P_max. For a fixed ε > 0, we start by setting T⁻ = −P_max, T⁺ = P_max, and T* = (T⁻ + T⁺)/2. We then run the algorithm of Section 5 with T_1 = ··· = T_n = T*. If the algorithm succeeds in finding a Nash equilibrium s' that satisfies EP_{V_i}(s') ≥ T* − ε for all i = 1, ..., n, we set T⁻ = T* and T* = (T⁻ + T⁺)/2; otherwise, we set T⁺ = T* and T* = (T⁻ + T⁺)/2, and loop. We repeat this process until |T⁺ − T⁻| ≤ ε. It is not hard to check that for any p ∈ R, if there is a Nash equilibrium s such that min_{i=1,...,n} EP_{V_i}(s) ≥ p, then our algorithm outputs a Nash equilibrium s' that satisfies min_{i=1,...,n} EP_{V_i}(s') ≥ p − 2ε. The running time of our algorithm is O(max{n P_max^3 log ε^{−1} / ε^3, n^4 log ε^{−1} / ε^3}).
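A sketch of this binary search, written around a stand-in subroutine bounded_payoff_ne(T, eps) that plays the role of the Section 5 algorithm: it is assumed to return a Nash equilibrium whose payoffs are all at least T − eps whenever one with payoffs at least T exists, and None otherwise. The wrapper itself is just the bisection described above; the names are ours.

```python
def max_min_payoff_ne(bounded_payoff_ne, p_max, eps):
    """Approximate the Nash equilibrium maximising the minimal expected payoff.

    bounded_payoff_ne(T, eps) -- stand-in for the Section 5 FPTAS with uniform
                                 thresholds T_1 = ... = T_n = T
    p_max                     -- maximum absolute payoff entry P_max
    eps                       -- approximation parameter

    Returns (equilibrium, guarantee): every player's expected payoff in the
    returned equilibrium is at least guarantee, and guarantee is within 2*eps
    of the best achievable minimum payoff.
    """
    lo, hi = -p_max, p_max            # T^- and T^+
    best, best_bound = None, -p_max
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        candidate = bounded_payoff_ne(mid, eps)
        if candidate is not None:     # payoffs are all >= mid - eps
            best, best_bound = candidate, mid - eps
            lo = mid
        else:
            hi = mid
    return best, best_bound
```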
6.3 Equalizing the payoffs

When the players' payoff matrices are not very different, it is reasonable to demand that the expected payoffs to the players not differ by much either. We will now show that Nash equilibria in this category can be approximated in polynomial time as well. Indeed, observe that the algorithm of Section 5 can easily be modified to deal with upper bounds on individual payoffs rather than lower bounds. Moreover, we can efficiently compute an approximation to a Nash equilibrium that satisfies both an upper bound and a lower bound for each player. More precisely, suppose that we are given a graphical game G, 2n rational numbers T_1, ..., T_n, T'_1, ..., T'_n, and ε > 0. Then if there exists a strategy profile s such that s is a Nash equilibrium for G and T_i ≤ EP_{V_i}(s) ≤ T'_i for i = 1, ..., n, we can find a strategy profile s' such that s' is a Nash equilibrium for G and T_i − ε ≤ EP_{V_i}(s') ≤ T'_i + ε for i = 1, ..., n. The modified algorithm also runs in time O(max{n P_max^3/ε^3, n^4/ε^3}).

This observation allows us to approximate Nash equilibria in which all players' expected payoffs differ by at most ξ, for any fixed ξ > 0. Given an ε > 0, we set T_1 = ··· = T_n = −P_max and T'_1 = ··· = T'_n = −P_max + ξ + ε, and run the modified version of the algorithm of Section 5. If it fails to find a solution, we increment all T_i and T'_i by ε and loop. We continue until the algorithm finds a solution, or until T_i ≥ P_max. Suppose that there exists a Nash equilibrium s that satisfies |EP_{V_i}(s) − EP_{V_j}(s)| ≤ ξ for all i, j = 1, ..., n. Set r = min_{i=1,...,n} EP_{V_i}(s); we have r ≤ EP_{V_i}(s) ≤ r + ξ for all i = 1, ..., n. There exists a k ≥ 0 such that −P_max + (k − 1)ε ≤ r ≤ −P_max + kε. During the kth step of the algorithm, we set T_1 = ··· = T_n = −P_max + (k − 1)ε, i.e., we have r − ε ≤ T_i ≤ r and r + ξ ≤ T'_i ≤ r + ξ + ε. That is, the Nash equilibrium s satisfies T_i ≤ r ≤ EP_{V_i}(s) ≤ r + ξ ≤ T'_i, which means that when T_i is set to −P_max + (k − 1)ε, our algorithm is guaranteed to output a Nash equilibrium t that satisfies r − 2ε ≤ T_i − ε ≤ EP_{V_i}(t) ≤ T'_i + ε ≤ r + ξ + 2ε. We conclude that whenever such a Nash equilibrium s exists, our algorithm outputs a Nash equilibrium t that satisfies |EP_{V_i}(t) − EP_{V_j}(t)| ≤ ξ + 4ε for all i, j = 1, ..., n. The running time of this algorithm is O(max{n P_max^3/ε^4, n^4/ε^4}).

Note also that we can find the smallest ξ for which such a Nash equilibrium exists by combining this algorithm with binary search over the range ξ ∈ [0, 2P_max]. This identifies an approximation to the fairest Nash equilibrium, i.e., one in which the players' expected payoffs differ by the smallest possible amount. Finally, note that all results in this section can be extended to bounded-degree trees.
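The sweep above is simple enough to sketch directly. The stand-in subroutine banded_payoff_ne(lo, hi, eps) models the modified Section 5 algorithm: it is assumed to return a Nash equilibrium with lo − eps ≤ payoff_i ≤ hi + eps for every player whenever one with lo ≤ payoff_i ≤ hi exists, and None otherwise; the function names are ours.

```python
def near_equal_payoff_ne(banded_payoff_ne, p_max, xi, eps):
    """Sketch of the Section 6.3 sweep: look for a Nash equilibrium whose expected
    payoffs all lie in a window of width xi (up to the approximation slack).

    Returns such an equilibrium, whose payoffs then differ by at most xi + 4*eps,
    or None if no equilibrium with payoff spread at most xi exists.
    """
    lo = -p_max
    while lo < p_max:
        candidate = banded_payoff_ne(lo, lo + xi + eps, eps)
        if candidate is not None:
            return candidate
        lo += eps                 # increment all thresholds by eps and try again
    return None
```

The sweep makes O(P_max/ε) calls to the banded subroutine, which accounts for the extra 1/ε factor in the stated running time.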
7. CONCLUSIONS

We have studied the problem of equilibrium selection in graphical games on bounded-degree trees. We considered several criteria for selecting a Nash equilibrium, such as maximizing the social welfare, ensuring a lower bound on the expected payoff of each player, etc. First, we focused on the algebraic complexity of a social-welfare-maximizing Nash equilibrium, and proved strong negative results for that problem. Namely, we showed that even for graphical games on paths, any algebraic number α ∈ [0, 1] may be the only strategy available to some player in all social-welfare-maximizing Nash equilibria. This is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players' strategies are rational numbers.

We then provided approximation algorithms for selecting Nash equilibria with special properties. While the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years, most of the existing work aims to find ε-Nash equilibria that satisfy (or are ε-close to satisfying) certain properties. Our approach is different in that we insist on outputting an exact Nash equilibrium, which is ε-close to satisfying a given requirement. As argued in the introduction, there are several reasons to prefer a solution that constitutes an exact Nash equilibrium. Our algorithms are fully polynomial time approximation schemes, i.e., their running time is polynomial in the inverse of the approximation parameter ε, though they may be pseudopolynomial with respect to the input size. Under mild restrictions on the inputs, they can be modified to be truly polynomial. This is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent, as is the case for many of the problems considered here. While we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles; in the full version of the paper we describe our algorithms for the general case.

Further work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total payoffs for subsets of players, selecting equilibria in which some players receive significantly higher payoffs than their peers, etc. At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here. It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this will make the analysis significantly more difficult. In particular, note that one can view the bounded-payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player V_i has a third action that guarantees him a payoff of T_i no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas. Alternatively, it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have a higher value (total expected payoff) than the best NE. The ratio between these values is called the mediation value in [1]. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite. Furthermore, a 2-player, 3-action example from [1] also has infinite mediation value.

8. REFERENCES
[1] I. Ashlagi, D. Monderer and M. Tennenholtz, On the Value of Correlation, Proceedings of Dagstuhl Seminar 05011 (2005)
[2] R. Aumann, Subjectivity and Correlation in Randomized Strategies, Journal of Mathematical Economics 1, pp. 67-96 (1974)
[3] B. Blum, C. R. Shelton, and D. Koller, A Continuation Method for Nash Equilibria in Structured Games, Proceedings of IJCAI'03
[4] X. Chen, X. Deng and S. Teng, Computing Nash Equilibria: Approximation and Smoothed Complexity, Proceedings of FOCS'06
[5] X. Chen, X. Deng, Settling the Complexity of 2-Player Nash-Equilibrium, Proceedings of FOCS'06
[6] V. Conitzer and T. Sandholm, Complexity Results about Nash Equilibria, Proceedings of IJCAI'03
[7] C. Daskalakis, P. W. Goldberg and C. H. Papadimitriou, The Complexity of Computing a Nash Equilibrium, Proceedings of STOC'06
[8] R. S. Datta, Universality of Nash Equilibria, Mathematics of Operations Research 28:3, 2003
[9] E. Elkind, L. A. Goldberg, and P. W.
Goldberg, Nash Equilibria in Graphical games on Trees Revisited, Proceedings of ACM EC``06 [10] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Computing Good Nash Equilibria in Graphical Games, http://arxiv.org/abs/cs.GT/0703133 [11] I. Gilboa and E. Zemel, Nash and Correlated Equilibria: Some Complexity Considerations, Games and Economic Behavior, 1 pp. 80-93 (1989) [12] P. W. Goldberg and C. H. Papadimitriou, Reducibility Among Equilibrium Problems, Proceedings of STOC``06 [13] M. Kearns, M. Littman, and S. Singh, Graphical Models for Game Theory, Proceedings of UAI``01 [14] M. Littman, M. Kearns, and S. Singh, An Efficient Exact Algorithm for Singly Connected Graphical Games, Proceedings of NIPS``01 [15] R. Lipton and E. Markakis, Nash Equilibria via Polynomial Equations, Proceedings of LATIN``04 [16] L. Ortiz and M. Kearns, Nash Propagation for Loopy Graphical Games, Proceedings of NIPS``03 [17] C.H. Papadimitriou, Computing Correlated Equilibria in Multi-Player Games, Proceedings of STOC``05 [18] C.H. Papadimitriou and T. Roughgarden, Computing Equilibria in Multi-Player Games, Proceedings of SODA``05 [19] D. Vickrey and D. Koller, Multi-agent Algorithms for Solving Graphical Games, Proceedings of AAAI``02 171", "lvl-3": "Computing Good Nash Equilibria in Graphical Games *\nABSTRACT\nThis paper addresses the problem of fair equilibrium selection in graphical games .\nOur approach is based on the data structure called the best response policy , which was proposed by Kearns et al. [ 13 ] as a way to represent all Nash equilibria of a graphical game .\nIn [ 9 ] , it was shown that the best response policy has polynomial size as long as the underlying graph is a path .\nIn this paper , we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants .\nAnother attractive solution concept is a Nash equilibrium that maximizes the social welfare .\nWe show that , while exactly computing the latter is infeasible ( we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree ) , there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size .\nThese two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria .\n1 .\nINTRODUCTION\nIn a large community of agents , an agent 's behavior is not likely to have a direct effect on most other agents : rather , it is just the * Supported by the EPSRC research grants `` Algorithmics of Network-sharing Games '' and `` Discontinuous Behaviour in the Complexity of randomized Algorithms '' .\nagents who are close enough to him that will be affected .\nHowever , as these agents respond by adapting their behavior , more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community .\nThis is the intuition behind graphical games , which were introduced by Kearns , Littman and Singh in [ 13 ] as a compact representation scheme for games with many players .\nIn an n-player graphical game , each player is associated with a vertex of an underlying graph G , and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph .\nIf the maximum degree of G is \u0394 , and each player has two actions available to him , then the game can be represented using n2\u0394 +1 numbers 
.\nIn contrast , we need n2n numbers to represent a general n-player 2-action game , which is only practical for small values of n. For graphical games with constant \u0394 , the size of the game is linear in n .\nOne of the most natural problems for a graphical game is that of finding a Nash equilibrium , the existence of which follows from Nash 's celebrated theorem ( as graphical games are just a special case of n-player games ) .\nThe first attempt to tackle this problem was made in [ 13 ] , where the authors consider graphical games with two actions per player in which the underlying graph is a boundeddegree tree .\nThey propose a generic algorithm for finding Nash equilibria that can be specialized in two ways : an exponential-time algorithm for finding an ( exact ) Nash equilibrium , and a fully polynomial time approximation scheme ( FPTAS ) for finding an approximation to a Nash equilibrium .\nFor any e > 0 this algorithm outputs an e-Nash equilibrium , which is a strategy profile in which no player can improve his payoff by more than e by unilaterally changing his strategy .\nWhile e-Nash equilibria are often easier to compute than exact Nash equilibria , this solution concept has several drawbacks .\nFirst , the players may be sensitive to a small loss in payoffs , so the strategy profile that is an e-Nash equilibrium will not be stable .\nThis will be the case even if there is only a small subset of players who are extremely price-sensitive , and for a large population of players it may be difficult to choose a value of a that will satisfy everyone .\nSecond , the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria .\nTherefore , the ( approximation to the ) value of the best solution that corresponds to an e-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium .\nThis is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents , as the benchmark implied by an e-Nash equilibrium may be unrealistic .\nFor these reasons , in this paper we focus on the problem of computing exact Nash equilibria .\nBuilding on ideas of [ 14 ] , Elkind et al. 
[ 9 ] showed how to find an ( exact ) Nash equilibrium in polynomial time when the underlying\ngraph has degree 2 ( that is , when the graph is a collection of paths and cycles ) .\nBy contrast , finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable : it has been shown ( see [ 5 , 12 , 7 ] ) to be complete for the complexity class PPAD .\n[ 9 ] extends this hardness result to the case in which the underlying graph has bounded pathwidth .\nA graphical game may not have a unique Nash equilibrium , indeed it may have exponentially many .\nMoreover , some Nash equilibria are more desirable than others .\nRather than having an algorithm which merely finds some Nash equilibrium , we would like to have algorithms for finding Nash equilibria with various sociallydesirable properties , such as maximizing overall payoff or distributing profit fairly .\nA useful property of the data structure of [ 13 ] is that it simultaneously represents the set of all Nash equilibria of the underlying game .\nIf this representation has polynomial size ( as is the case for paths , as shown in [ 9 ] ) , one may hope to extract from it a Nash equilibrium with the desired properties .\nIn fact , in [ 13 ] the authors mention that this is indeed possible if one is interested in finding an ( approximate ) a-Nash equilibrium .\nThe goal of this paper is to extend this to exact Nash equilibria .\n1.1 Our Results\nIn this paper , we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [ 13 ] has size poly ( n ) .\nWe focus on the problem of finding exact Nash equilibria with certain socially-desirable properties .\nIn particular , we show how to find a Nash equilibrium that ( nearly ) maximizes the social welfare , i.e. 
, the sum of the players ' payoffs , and we show how to find a Nash equilibrium that ( nearly ) satisfies prescribed payoff bounds for all players .\nGraphical games on bounded-degree trees have a simple algebraic structure .\nOne attractive feature , which follows from [ 13 ] , is that every such game has a Nash equilibrium in which the strategy of every player is a rational number .\nSection 3 studies the algebraic structure of those Nash equilibria that maximize social welfare .\nWe show ( Theorems 1 and 2 ) that , surprisingly , the set of Nash equilibria that maximize social welfare is more complex .\nIn fact , for any algebraic number \u03b1 \u2208 [ 0 , 1 ] with degree at most n , we exhibit a graphical game on a path of length O ( n ) such that , in the unique social welfare-maximizing Nash equilibrium of this game , one of the players plays the mixed strategy \u03b1 .1 This result shows that it may be difficult to represent an optimal Nash equilibrium .\nIt seems to be a novel feature of the setting we consider here , that an optimal Nash equilibrium is hard to represent , in a situation where it is easy to find and represent a Nash equilibrium .\nAs the social welfare-maximizing Nash equilibrium may be hard to represent efficiently , we have to settle for an approximation .\nHowever , the crucial difference between our approach and that of previous papers [ 13 , 16 , 19 ] is that we require our algorithm to output an exact Nash equilibrium , though not necessarily the optimal one with respect to our criteria .\nIn Section 4 , we describe an algorithm that satisfies this requirement .\nNamely , we propose an algorithm that for any e > 0 finds a Nash equilibrium whose total payoff is within a of optimal .\nIt runs in polynomial time ( Theorem 3,4 ) for any graphical game on a bounded-degree tree for which the data structure proposed by [ 13 ] ( the so-called best response policy , defined below ) is of size poly ( n ) ( note that , as shown in [ 9 ] , this is always the case when the underlying graph is a path ) .\nMore pre1A related result in a different context was obtained by Datta [ 8 ] , who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games .\ncisely , the running time of our algorithm is polynomial in n , Pmax , and 1/e , where Pmax is the maximum absolute value of an entry of a payoff matrix , i.e. 
, it is a pseudopolynomial algorithm , though it is fully polynomial with respect to E .\nWe show ( Section 4.1 ) that under some restrictions on the payoff matrices , the algorithm can be transformed into a ( truly ) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 \u2212 e factor from the optimal .\nIn Section 5 , we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti .\nUsing the idea from Section 4 we give ( Theorem 5 ) a fully polynomial time approximation scheme for this problem .\nThe running time of the algorithm is bounded by a polynomial in n , Pmax , and E .\nIf the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti \u2212 E .\nIn Section 6 , we introduce other natural criteria for selecting a `` good '' Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria .\nIn particular , in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare , while guaranteeing that each individual payoff is close to a prescribed threshold .\nIn Section 6.2 we show how to find a Nash equilibrium that ( nearly ) maximizes the minimum individual payoff .\nFinally , in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other .\n1.2 Related Work\nOur approximation scheme ( Theorem 3 and Theorem 4 ) shows a contrast between the games that we study and two-player n-action games , for which the corresponding problems are usually intractable .\nFor two-player n-action games , the problem of finding Nash equilibria with special properties is typically NP-hard .\nIn particular , this is the case for Nash equilibria that maximize the social welfare [ 11 , 6 ] .\nMoreover , it is likely to be intractable even to approximate such equilibria .\nIn particular , Chen , Deng and Teng [ 4 ] show that there exists some e , inverse polynomial in n , for which computing an e-Nash equilibrium in 2-player games with n actions per player is PPAD-complete .\nLipton and Markakis [ 15 ] study the algebraic properties of Nash equilibria , and point out that standard quantifier elimination algorithms can be used to solve them .\nNote that these algorithms are not polynomial-time in general .\nThe games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers , but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree .\nA correlated equilibrium ( CE ) ( introduced by Aumann [ 2 ] ) is a distribution over vectors of players ' actions with the property that if any player is told his own action ( the value of his own component ) from a vector generated by that distribution , then he can not increase his expected payoff by changing his action .\nAny Nash equilibrium is a CE but the converse does not hold in general .\nIn contrast with Nash equilibria , correlated equilibria can be found for low-degree graphical games ( as well as other classes of conciselyrepresented multiplayer games ) in polynomial time [ 17 ] .\nBut , for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [ 18 ] .\nHowever , the NP-hardness results apply 
to more general games than the one we consider here , in particular the graphs are not trees .\nFrom [ 2 ] it is also known that there exist 2-player , 2-action games for which the expected total payoff\nof the best correlated equilibrium is higher than the best Nash equilibrium , and we discuss this issue further in Section 7 .\n2 .\nPRELIMINARIES AND NOTATION\n3 .\nNASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE : SOLUTIONS IN R \\ Q\n3.1 Warm-up : quadratic irrationalities\n3.2 Strategies of arbitrary degree\n4 .\nAPPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM\nDefine ml , k\n4.1 A polynomial-time algorithm for multiplicative approximation\n5 .\nBOUNDED PAYOFF NASH EQUILIBRIA\n5.1 Exact Computation\n6 .\nOTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM\n6.1 Combining welfare maximization with bounds on payoffs\n6.2 A minimax approach\n6.3 Equalizing the payoffs\n7 .\nCONCLUSIONS\nWe have studied the problem of equilibrium selection in graphical games on bounded-degree trees .\nWe considered several criteria for selecting a Nash equilibrium , such as maximizing the social welfare , ensuring a lower bound on the expected payoff of each player , etc. .\nFirst , we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium , and proved strong negative results for that problem .\nNamely , we showed that even for graphical games on paths , any algebraic number \u03b1 E [ 0 , 1 ] may be the only strategy available to some player in all social welfaremaximizing Nash equilibria .\nThis is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players ' strategies are rational numbers .\nWe then provided approximation algorithms for selecting Nash equilibria with special properties .\nWhile the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years , most of the existing work aims to find E-Nash equilibria that satisfy ( or are E-close to satisfying ) certain properties .\nOur approach is different in that we insist on outputting an exact Nash equilibrium , which is E-close to satisfying a given requirement .\nAs argued in the introduction , there are several reasons to prefer a solution that constitutes an exact Nash equilibrium .\nOur algorithms are fully polynomial time approximation schemes , i.e. , their running time is polynomial in the inverse of the approximation parameter E , though they may be pseudopolynomial with respect to the input size .\nUnder mild restrictions on the inputs , they can be modified to be truly polynomial .\nThis is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent , as is the case for many of the problems considered here .\nWhile we prove our results for games on a path , they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles .\nIn the full version of the paper we describe our algorithms for the general case .\nFurther work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria , such as guaranteeing total payoffs for subsets of players , selecting equilibria in which some players are receiving significantly higher payoffs than their peers , etc. 
At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here. It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this will make the analysis significantly more difficult. In particular, note that one can view the bounded payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player Vi has a third action that guarantees him a payoff of Ti no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas. Alternatively it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have higher value (total expected payoff) than the best NE. The ratio between these values is called the mediation value in [1]. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite. Furthermore, a 2-player, 3-action example from [1] also has infinite mediation value.", "lvl-2": "Computing Good Nash Equilibria in Graphical Games *
ABSTRACT
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy, which was proposed by Kearns et al.
[ 13 ] as a way to represent all Nash equilibria of a graphical game .\nIn [ 9 ] , it was shown that the best response policy has polynomial size as long as the underlying graph is a path .\nIn this paper , we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants .\nAnother attractive solution concept is a Nash equilibrium that maximizes the social welfare .\nWe show that , while exactly computing the latter is infeasible ( we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree ) , there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size .\nThese two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria .\n1 .\nINTRODUCTION\nIn a large community of agents , an agent 's behavior is not likely to have a direct effect on most other agents : rather , it is just the * Supported by the EPSRC research grants `` Algorithmics of Network-sharing Games '' and `` Discontinuous Behaviour in the Complexity of randomized Algorithms '' .\nagents who are close enough to him that will be affected .\nHowever , as these agents respond by adapting their behavior , more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community .\nThis is the intuition behind graphical games , which were introduced by Kearns , Littman and Singh in [ 13 ] as a compact representation scheme for games with many players .\nIn an n-player graphical game , each player is associated with a vertex of an underlying graph G , and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph .\nIf the maximum degree of G is \u0394 , and each player has two actions available to him , then the game can be represented using n2\u0394 +1 numbers .\nIn contrast , we need n2n numbers to represent a general n-player 2-action game , which is only practical for small values of n. 
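The compact representation just described is easy to picture in code. Below is a minimal Python sketch (not taken from the paper; the class and attribute names are illustrative assumptions) of a per-player payoff table for a 2-action graphical game, indexed only by the player's own action and the actions of its neighbours:

```python
from itertools import product

class TreeGamePlayer:
    """Payoff table for one vertex of a 2-action graphical game.

    Illustrative sketch only (the paper gives no code): a player's payoff
    depends just on its own action and its neighbours' actions, so the
    table holds 2**(deg + 1) entries instead of the 2**n entries a general
    n-player game would need for this player.
    """

    def __init__(self, neighbours, payoff_table):
        self.neighbours = list(neighbours)   # ids of adjacent vertices in G
        # maps (own_action, tuple_of_neighbour_actions) -> integer payoff
        self.payoff = dict(payoff_table)

    def expected_payoff(self, own_mix, neighbour_mixes):
        """Expected payoff when this player plays 1 with probability own_mix
        and neighbour j plays 1 with probability neighbour_mixes[j]."""
        total = 0.0
        for own in (0, 1):
            p_own = own_mix if own == 1 else 1.0 - own_mix
            for acts in product((0, 1), repeat=len(self.neighbours)):
                prob = p_own
                for act, mix in zip(acts, neighbour_mixes):
                    prob *= mix if act == 1 else 1.0 - mix
                total += prob * self.payoff[(own, acts)]
        return total
```

For a vertex with Δ neighbours this table holds 2^(Δ+1) entries, so keeping one such table per player gives the n·2^(Δ+1) count mentioned above.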
For graphical games with constant \u0394 , the size of the game is linear in n .\nOne of the most natural problems for a graphical game is that of finding a Nash equilibrium , the existence of which follows from Nash 's celebrated theorem ( as graphical games are just a special case of n-player games ) .\nThe first attempt to tackle this problem was made in [ 13 ] , where the authors consider graphical games with two actions per player in which the underlying graph is a boundeddegree tree .\nThey propose a generic algorithm for finding Nash equilibria that can be specialized in two ways : an exponential-time algorithm for finding an ( exact ) Nash equilibrium , and a fully polynomial time approximation scheme ( FPTAS ) for finding an approximation to a Nash equilibrium .\nFor any e > 0 this algorithm outputs an e-Nash equilibrium , which is a strategy profile in which no player can improve his payoff by more than e by unilaterally changing his strategy .\nWhile e-Nash equilibria are often easier to compute than exact Nash equilibria , this solution concept has several drawbacks .\nFirst , the players may be sensitive to a small loss in payoffs , so the strategy profile that is an e-Nash equilibrium will not be stable .\nThis will be the case even if there is only a small subset of players who are extremely price-sensitive , and for a large population of players it may be difficult to choose a value of a that will satisfy everyone .\nSecond , the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria .\nTherefore , the ( approximation to the ) value of the best solution that corresponds to an e-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium .\nThis is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents , as the benchmark implied by an e-Nash equilibrium may be unrealistic .\nFor these reasons , in this paper we focus on the problem of computing exact Nash equilibria .\nBuilding on ideas of [ 14 ] , Elkind et al. 
[ 9 ] showed how to find an ( exact ) Nash equilibrium in polynomial time when the underlying\ngraph has degree 2 ( that is , when the graph is a collection of paths and cycles ) .\nBy contrast , finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable : it has been shown ( see [ 5 , 12 , 7 ] ) to be complete for the complexity class PPAD .\n[ 9 ] extends this hardness result to the case in which the underlying graph has bounded pathwidth .\nA graphical game may not have a unique Nash equilibrium , indeed it may have exponentially many .\nMoreover , some Nash equilibria are more desirable than others .\nRather than having an algorithm which merely finds some Nash equilibrium , we would like to have algorithms for finding Nash equilibria with various sociallydesirable properties , such as maximizing overall payoff or distributing profit fairly .\nA useful property of the data structure of [ 13 ] is that it simultaneously represents the set of all Nash equilibria of the underlying game .\nIf this representation has polynomial size ( as is the case for paths , as shown in [ 9 ] ) , one may hope to extract from it a Nash equilibrium with the desired properties .\nIn fact , in [ 13 ] the authors mention that this is indeed possible if one is interested in finding an ( approximate ) a-Nash equilibrium .\nThe goal of this paper is to extend this to exact Nash equilibria .\n1.1 Our Results\nIn this paper , we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [ 13 ] has size poly ( n ) .\nWe focus on the problem of finding exact Nash equilibria with certain socially-desirable properties .\nIn particular , we show how to find a Nash equilibrium that ( nearly ) maximizes the social welfare , i.e. 
, the sum of the players ' payoffs , and we show how to find a Nash equilibrium that ( nearly ) satisfies prescribed payoff bounds for all players .\nGraphical games on bounded-degree trees have a simple algebraic structure .\nOne attractive feature , which follows from [ 13 ] , is that every such game has a Nash equilibrium in which the strategy of every player is a rational number .\nSection 3 studies the algebraic structure of those Nash equilibria that maximize social welfare .\nWe show ( Theorems 1 and 2 ) that , surprisingly , the set of Nash equilibria that maximize social welfare is more complex .\nIn fact , for any algebraic number \u03b1 \u2208 [ 0 , 1 ] with degree at most n , we exhibit a graphical game on a path of length O ( n ) such that , in the unique social welfare-maximizing Nash equilibrium of this game , one of the players plays the mixed strategy \u03b1 .1 This result shows that it may be difficult to represent an optimal Nash equilibrium .\nIt seems to be a novel feature of the setting we consider here , that an optimal Nash equilibrium is hard to represent , in a situation where it is easy to find and represent a Nash equilibrium .\nAs the social welfare-maximizing Nash equilibrium may be hard to represent efficiently , we have to settle for an approximation .\nHowever , the crucial difference between our approach and that of previous papers [ 13 , 16 , 19 ] is that we require our algorithm to output an exact Nash equilibrium , though not necessarily the optimal one with respect to our criteria .\nIn Section 4 , we describe an algorithm that satisfies this requirement .\nNamely , we propose an algorithm that for any e > 0 finds a Nash equilibrium whose total payoff is within a of optimal .\nIt runs in polynomial time ( Theorem 3,4 ) for any graphical game on a bounded-degree tree for which the data structure proposed by [ 13 ] ( the so-called best response policy , defined below ) is of size poly ( n ) ( note that , as shown in [ 9 ] , this is always the case when the underlying graph is a path ) .\nMore pre1A related result in a different context was obtained by Datta [ 8 ] , who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games .\ncisely , the running time of our algorithm is polynomial in n , Pmax , and 1/e , where Pmax is the maximum absolute value of an entry of a payoff matrix , i.e. 
, it is a pseudopolynomial algorithm , though it is fully polynomial with respect to E .\nWe show ( Section 4.1 ) that under some restrictions on the payoff matrices , the algorithm can be transformed into a ( truly ) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 \u2212 e factor from the optimal .\nIn Section 5 , we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti .\nUsing the idea from Section 4 we give ( Theorem 5 ) a fully polynomial time approximation scheme for this problem .\nThe running time of the algorithm is bounded by a polynomial in n , Pmax , and E .\nIf the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti \u2212 E .\nIn Section 6 , we introduce other natural criteria for selecting a `` good '' Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria .\nIn particular , in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare , while guaranteeing that each individual payoff is close to a prescribed threshold .\nIn Section 6.2 we show how to find a Nash equilibrium that ( nearly ) maximizes the minimum individual payoff .\nFinally , in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other .\n1.2 Related Work\nOur approximation scheme ( Theorem 3 and Theorem 4 ) shows a contrast between the games that we study and two-player n-action games , for which the corresponding problems are usually intractable .\nFor two-player n-action games , the problem of finding Nash equilibria with special properties is typically NP-hard .\nIn particular , this is the case for Nash equilibria that maximize the social welfare [ 11 , 6 ] .\nMoreover , it is likely to be intractable even to approximate such equilibria .\nIn particular , Chen , Deng and Teng [ 4 ] show that there exists some e , inverse polynomial in n , for which computing an e-Nash equilibrium in 2-player games with n actions per player is PPAD-complete .\nLipton and Markakis [ 15 ] study the algebraic properties of Nash equilibria , and point out that standard quantifier elimination algorithms can be used to solve them .\nNote that these algorithms are not polynomial-time in general .\nThe games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers , but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree .\nA correlated equilibrium ( CE ) ( introduced by Aumann [ 2 ] ) is a distribution over vectors of players ' actions with the property that if any player is told his own action ( the value of his own component ) from a vector generated by that distribution , then he can not increase his expected payoff by changing his action .\nAny Nash equilibrium is a CE but the converse does not hold in general .\nIn contrast with Nash equilibria , correlated equilibria can be found for low-degree graphical games ( as well as other classes of conciselyrepresented multiplayer games ) in polynomial time [ 17 ] .\nBut , for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [ 18 ] .\nHowever , the NP-hardness results apply 
to more general games than the one we consider here , in particular the graphs are not trees .\nFrom [ 2 ] it is also known that there exist 2-player , 2-action games for which the expected total payoff\nof the best correlated equilibrium is higher than the best Nash equilibrium , and we discuss this issue further in Section 7 .\n2 .\nPRELIMINARIES AND NOTATION\nWe consider graphical games in which the underlying graph G is an n-vertex tree , in which each vertex has at most \u0394 children .\nEach vertex has two actions , which are denoted by 0 and 1 .\nA mixed strategy of a player V is represented as a single number v E [ 0 , 1 ] , which denotes the probability that V selects action 1 .\nFor the purposes of the algorithm , the tree is rooted arbitrarily .\nFor convenience , we assume without loss of generality that the root has a single child , and that its payoff is independent of the action chosen by the child .\nThis can be achieved by first choosing an arbitrary root of the tree , and then adding a dummy `` parent '' of this root , giving the new parent a constant payoff function , e.g. , 0 .\nGiven an edge ( V , W ) of the tree G , and a mixed strategy w for W , let G ( V , W ) , W = w be the instance obtained from G by ( 1 ) deleting all nodes Z which are separated from V by W ( i.e. , all nodes Z such that the path from Z to V passes through W ) , and ( 2 ) restricting the instance so that W is required to play mixed strategy w.\nv is a mixed strategy for V and that w is a mixed strategy for W .\nWe say that v is a potential best response to w ( denoted by v E pbrV ( w ) ) if there is an equilibrium in the instance G ( V , W ) , W = w in which V has mixed strategy v .\nWe define the best response policy for V , given W , as B ( W , V ) = { ( w , v ) | v E pbrV ( w ) , w E [ 0 , 1 ] } .\nThe upstream pass of the generic algorithm of [ 13 ] considers every node V ( other than the root ) and computes the best response policy for V given its parent .\nWith the above assumptions about the root , the downstream pass is straightforward .\nThe root selects a mixed strategy w for the root W and a mixed strategy v E B ( W , V ) for each child V of W .\nIt instructs each child V to play v .\nThe remainder of the downward pass is recursive .\nWhen a node V is instructed by its parent to adopt mixed strategy v , it does the following for each child U -- It finds a pair ( v , u ) E B ( V , U ) ( with the same v value that it was given by its parent ) and instructs U to play u .\nThe best response policy for a vertex U given its parent V can be represented as a union of rectangles , where a rectangle is defined by a pair of closed intervals ( IV , IU ) and consists of all points in IV x IU ; it may be the case that one or both of the intervals IV and IU consists of a single point .\nIn order to perform computations on B ( V , U ) , and to bound the number of rectangles , [ 9 ] used the notion of an event point , which is defined as follows .\nFor any set A C _ [ 0 , 1 ] 2 that is represented as a union of a finite number of rectangles , we say that a point u E [ 0 , 1 ] on the U-axis is a Uevent point of A if u = 0 or u = 1 or the representation of A contains a rectangle of the form IV x IU and u is an endpoint of IU ; V - event points are defined similarly .\nFor many games considered in this paper , the underlying graph is an n-vertex path , i.e. , a graph G = ( V , E ) with V = { V1 , ... , Vn } and E = { ( V1 , V2 ) , ... 
, ( Vn_1 , Vn ) } .\nIn [ 9 ] , it was shown that for such games , the best response policy has only polynomially-many rectangles .\nThe proof that the number of rectangles in B ( Vj +1 , Vj ) is polynomial proceeds by first showing that the number of event points in B ( Vj +1 , Vj ) can not exceed the number of event points in B ( Vj , Vj_1 ) by more than 2 , and using this fact to bound the number of rectangles in B ( Vj +1 , Vj ) .\nLet P0 ( V ) and P1 ( V ) be the expected payoffs to V when it plays 0 and 1 , respectively .\nBoth P0 ( V ) and P1 ( V ) are multilinear functions of the strategies of V 's neighbors .\nIn what follows , we will frequently use the following simple observation .\nPROOF .\nWe will give the proof for P0 ( V ) ; the proof for P1 ( V ) is similar .\nFor i , j = 0 , 1 , let Pij be the payoff to V when U plays i , V plays 0 and W plays j .\nWe have P0 ( V ) = P00 ( 1 \u2212 u ) ( 1 \u2212 w ) + P10u ( 1 \u2212 w ) + P01 ( 1 \u2212 u ) w + P11uw .\nWe have to select the values of Pij so that P00 \u2212 P10 \u2212 P01 + P11 = A , \u2212 P00 + P10 = B , \u2212 P00 + P01 = C , P00 = D .\nIt is easy to see that the unique solution is given by P00 = D , P01 = C + D , P10 = B + D , P11 = A + B + C + D .\nThe input to all algorithms considered in this paper includes the payoff matrices for each player .\nWe assume that all elements of these matrices are integer .\nLet Pmax be the greatest absolute value of any element of any payoff matrix .\nThen the input consists of at most n2\u0394 +1 numbers , each of which can be represented using [ log Pmax ] bits .\n3 .\nNASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE : SOLUTIONS IN R \\ Q\nFrom the point of view of social welfare , the best Nash equilibrium is the one that maximizes the sum of the players ' expected payoffs .\nUnfortunately , it turns out that computing such a strategy profile exactly is not possible : in this section , we show that even if all players ' payoffs are integers , the strategy profile that maximizes the total payoff may have irrational coordinates ; moreover , it may involve algebraic numbers of an arbitrary degree .\n3.1 Warm-up : quadratic irrationalities\nWe start by providing an example of a graphical game on a path of length 3 with integer payoffs such that in the Nash equilibrium that maximizes the total payoff , one of the players has a strategy in R \\ Q .\nIn the next subsection , we will extend this example to algebraic numbers of arbitrary degree n ; to do so , we have to consider paths of length O ( n ) .\nTHEOREM 1 .\nThere exists an integer-payoff graphical game G on a 3-vertex path UV W such that , in any Nash equilibrium of G that maximizes social welfare , the strategy , u , of the player U and the total payoff , p , satisfy u , p E R \\ Q. PROOF .\nThe payoffs to the players in G are specified as follows .\nThe payoff to U is identically 0 , i.e. 
, P0 ( U ) = P1 ( U ) = 0 .\nUsing Claim 1 , we select the payoffs to V so that P0 ( V ) = \u2212 uw + 3w and P1 ( V ) = P0 ( V ) + w ( u + 2 ) \u2212 ( u + 1 ) , where u and w are the ( mixed ) strategies of U and W , respectively .\nIt follows that V is indifferent between playing 0 and 1 if and only if w = f ( u ) = u +1 u +2 .\nObserve that for any u E [ 0 , 1 ] we have f ( u ) E [ 0 , 1 ] .\nThe payoff to W is 0 if it selects the same action as V and 1 otherwise .\nany mixed strategy u no matter what V and W do .\nFurthermore , V is indifferent between 0 and 1 as long as w = f ( u ) , so it can play 1/2 .\nFinally , if V plays 0 and 1 with equal probability , W is indifferent between 0 and 1 , so it can play f ( u ) .\nConversely , suppose that v > 1/2 .\nThen W strictly prefers to play 0 , i.e. , w = 0 .\nThen for V we have P1 ( V ) = P0 ( V ) -- ( u + 1 ) , i.e. , P1 ( V ) < P0 ( V ) , which implies v = 0 , a contradiction .\nSimilarly , if v < 1/2 , player W prefers to play 1 , so we have w = 1 .\nHence , P1 ( V ) = P0 ( V ) + ( u + 2 ) -- ( u + 1 ) , i.e. , P1 ( V ) > P0 ( V ) , which implies v = 1 , a contradiction .\nFinally , if v = 1/2 , but w = ~ f ( u ) , player V is not indifferent between 0 and 1 , so he would deviate from playing 1/2 .\nThis completes the proof of Claim 2 .\nBy Claim 2 , the total payoff in any Nash equilibrium of this game is a function of u .\nMore specifically , the payoff to U is 0 , the payoff to V is -- uf ( u ) + 3f ( u ) , and the payoff to W is 1/2 .\nTherefore , the Nash equilibrium with the maximum total payoff corresponds to the value of u that maximizes\nThis concludes the proof of Theorem 1 .\n3.2 Strategies of arbitrary degree\nWe have shown that in the social welfare-maximizing Nash equilibrium , some players ' strategies can be quadratic irrationalities , and so can the total payoff .\nIn this subsection , we will extend this result to show that we can construct an integer-payoff graphical game on a path whose social welfare-maximizing Nash equilibrium involves arbitrary algebraic numbers in [ 0 , 1 ] .\nTHEOREM 2 .\nFor any degree-n algebraic number \u03b1 E [ 0 , 1 ] , there exists an integer payoff graphical game on a path of length O ( n ) such that , in all social welfare-maximizing Nash equilibria of this game , one of the players plays \u03b1 .\nPROOF .\nOur proof consists of two steps .\nFirst , we construct a rational expression R ( x ) and a segment [ x ~ , x ~ ~ ] such that x ~ , x ~ ~ E Q and \u03b1 is the only maximum of R ( x ) on [ x ~ , x ~ ~ ] .\nSecond , we construct a graphical game whose Nash equilibria can be parameterized by u E [ x ~ , x ~ ~ ] , so that at the equilibrium that corresponds to u the total payoff is R ( u ) and , moreover , some player 's strategy is u .\nIt follows that to achieve the payoff-maximizing Nash equilibrium , this player has to play \u03b1 .\nThe details follow .\nLEMMA 1 .\nGiven an algebraic number \u03b1 E [ 0 , 1 ] , deg ( \u03b1 ) = n , there exist K2 , ... , K2n +2 E Q and x ~ , x ~ ~ E ( 0 , 1 ) n Q such\nPROOF .\nLet P ( x ) be the minimal polynomial of \u03b1 , i.e. , a polynomial of degree n with rational coefficients whose leading coefficient is 1 such that P ( \u03b1 ) = 0 .\nLet A = { \u03b11 , ... 
, \u03b1n1 be the set of all roots of P ( x ) .\nConsider the polynomial Q1 ( x ) = -- P2 ( x ) .\nIt has the same roots as P ( x ) , and moreover , for any x E ~ A we have Q1 ( x ) < 0 .\nHence , A is the set of all maxima of Q1 ( x ) .\nNow , set\nx E [ 0 , 1 ] and R ( x ) = 0 if and only if Q1 ( x ) = 0 .\nHence , the set A is also the set of all maxima of R ( x ) on [ 0 , 1 ] .\nLet d = min { l\u03b1i -- \u03b1l l \u03b1i E A , \u03b1i = ~ \u03b11 , and set \u03b1 ~ = max { \u03b1 -- d/2 , 01 , \u03b1 ~ ~ = min { \u03b1 + d/2 , 11 .\nClearly , \u03b1 is the only zero ( and hence , the only maximum ) of R ( x ) on [ \u03b1 ~ , \u03b1 ~ ~ ] .\nLet x ~ and x ~ ~ be some rational numbers in ( \u03b1 ~ , \u03b1 ) and ( \u03b1 , \u03b1 ~ ~ ) , respectively ; note that by excluding the endpoints of the intervals we ensure that x ~ , x ~ ~ = ~ 0 , 1 .\nAs [ x ~ , x ~ ~ ] C [ \u03b1 ~ , \u03b1 ~ ~ ] , we have that \u03b1 is the only maximum of R ( x ) on [ x ~ , x ~ ~ ] .\nAs R ( x ) is a proper rational expression and all roots of its denominator are simple , by partial fraction decomposition theorem , R ( x ) can be represented as\nwhere K2 , ... , K2n +2 are rational numbers .\nConsider a graphical game on the path\nwhere k = 2n + 2 .\nIntuitively , we want each triple ( Ui \u2212 1 , Vi \u2212 1 , Ui ) to behave similarly to the players U , V , and W from the game described in the previous subsection .\nMore precisely , we define the payoffs to the players in the following way .\n.\nThe payoff to U \u2212 1 is 0 no matter what everyone else does .\n.\nThe expected payoff to V \u2212 1 is 0 if it plays 0 and u0 -- ( x ~ ~ -- x ~ ) u \u2212 1 -- x ~ if it plays 1 , where u0 and u \u2212 1 are the strategies of U0 and U \u2212 1 , respectively .\n.\nThe expected payoff to V0 is 0 if it plays 0 and u1 ( u0 + 1 ) -- u0 if it plays 1 , where u0 and u1 are the strategies of U0 and U1 , respectively .\n.\nFor each i = 1 , ... , k -- 1 , the expected payoff to Vi when it plays 0 is P0 ( Vi ) = Aiuiui +1 -- Aiui +1 , and the expected payoff to Vi when it plays 1 is P1 ( Vi ) = P0 ( Vi ) + ui +1 ( 2 -- ui ) -- 1 , where Ai = -- Ki +1 and ui +1 and ui are the strategies of Ui +1 and Ui , respectively .\n.\nFor each i = 0 , ... , k , the payoff to Ui does not depend on Vi and is 1 if Ui and Vi \u2212 1 select different actions and 0 otherwise .\nWe will now characterize the Nash equilibria of this game using a sequence of claims .\nCLAIM 3 .\nIn all Nash equilibria of this game V \u2212 1 plays 1/2 , and the strategies of u \u2212 1 and u0 satisfy u0 = ( x ~ ~ -- x ~ ) u \u2212 1 + x ~ .\nConsequently , in all Nash equilibria we have u0 E [ x ~ , x ~ ~ ] .\nThe function g ( u ) changes sign at -- 2 , -- 1 , and 3 .\nWe have g ( u ) < 0 for g > 3 , g ( u ) > 0 for u < -- 2 , so the extremum of g ( u ) that lies between 1 and 3 , i.e. , u = -- 2 + \\ / 5 , is a local maximum .\nWe conclude that the social welfare-maximizing Nash equilibrium for this game is given by the vector of strategies ( -- 2 + \\ / 5 , 1/2 , ( 5 -- \\ / 5 ) / 5 ) .\nThe respective total payoff is\nPROOF .\nThe proof is similar to that of Claim 2 .\nLet f ( u \u2212 1 ) = ( x ~ ~ \u2212 x ~ ) u \u2212 1 + x ~ .\nClearly , the player V \u2212 1 is indifferent between playing 0 and 1 if and only if u0 = f ( u \u2212 1 ) .\nSuppose that v \u2212 1 < 1/2 .\nThen U0 strictly prefers to play 1 , i.e. 
, u0 = 1 , so we have\nAs x ~ < x ~ ~ , x ~ > 0 , we have P1 ( V \u2212 1 ) < P0 ( V \u2212 1 ) , so V \u2212 1 prefers to play 0 , a contradiction .\nFinally , if V \u2212 1 plays 1/2 , but u0 = ~ f ( u \u2212 1 ) , player V \u2212 1 is not indifferent between 0 and 1 , so he would deviate from playing 1/2 .\nAlso , note that f ( 0 ) = x ~ , f ( 1 ) = x ~ ~ , and , moreover , f ( u \u2212 1 ) \u2208 [ x ~ , x ~ ~ ] if and only if u \u2212 1 \u2208 [ 0 , 1 ] .\nHence , in all Nash equilibria of this game we have u0 \u2208 [ x ~ , x ~ ~ ] .\nCLAIM 4 .\nIn all Nash equilibria of this game for each i = 0 , ... , k \u2212 1 , we have vi = 1/2 , and the strategies of the players Ui and Ui +1 satisfy ui +1 = fi ( ui ) , where f0 ( u ) = u / ( u + 1 ) and fi ( u ) = 1 / ( 2 \u2212 u ) for i > 0 .\nPROOF .\nThe proof of this claim is also similar to that of Claim 2 .\nWe use induction on i to prove that the statement of the claim is true and , additionally , ui = ~ 1 for i > 0 .\nFor the base case i = 0 , note that u0 = ~ 0 by the previous claim ( recall that x ~ , x ~ ~ are selected so that x ~ , x ~ ~ = ~ 0 , 1 ) and consider the triple ( U0 , V0 , U1 ) .\nLet v0 be the strategy of V0 .\nFirst , suppose that v0 > 1/2 .\nThen U1 strictly prefers to play 0 , i.e. , u1 = 0 .\nThen for V0 we have P1 ( V0 ) = P0 ( V0 ) \u2212 u0 .\nAs u0 = ~ 0 , we have P1 ( V0 ) < P0 ( V0 ) , which implies v1 = 0 , a contradiction .\nSimilarly , if v0 < 1/2 , player U1 prefers to play 1 , so we have u1 = 1 .\nHence , P1 ( V0 ) = P0 ( V0 ) + 1 .\nIt follows that P1 ( V0 ) > P0 ( V0 ) , which implies v0 = 1 , a contradiction .\nFinally , if v0 = 1/2 , but u1 = ~ u0 / ( u0 + 1 ) , player V0 is not indifferent between 0 and 1 , so he would deviate from playing 1/2 .\nMoreover , as u1 = u0 / ( u0 + 1 ) and u0 \u2208 [ 0 , 1 ] , we have u1 = ~ 1 .\nThe argument for the inductive step is similar .\nNamely , suppose that the statement is proved for all i ~ < i and consider the triple ( Ui , Vi , Ui +1 ) .\nLet vi be the strategy of Vi .\nFirst , suppose that vi > 1/2 .\nThen Ui +1 strictly prefers to play 0 , i.e. , ui +1 = 0 .\nThen for Vi we have P1 ( Vi ) = P0 ( Vi ) \u2212 1 , i.e. , P1 ( Vi ) < P0 ( Vi ) , which implies vi = 0 , a contradiction .\nSimilarly , if vi < 1/2 , player Ui +1 prefers to play 1 , so we have ui +1 = 1 .\nHence , P1 ( Vi ) = P0 ( Vi ) + 1 \u2212 ui .\nBy inductive hypothesis , we have ui < 1 .\nConsequently , P1 ( Vi ) > P0 ( Vi ) , which implies vi = 1 , a contradiction .\nFinally , if vi = 1/2 , but ui +1 = ~ 1 / ( 2 \u2212 ui ) , player Vi is not indifferent between 0 and 1 , so he would deviate from playing 1/2 .\nMoreover , as ui +1 = 1 / ( 2 \u2212 ui ) and ui < 1 , we have ui +1 < 1 .\nwhere u \u2212 1 \u2208 [ 0 , 1 ] , u0 = ( x ~ ~ \u2212 x ~ ) u \u2212 1 + x ~ , u1 = u0 / ( u0 + 1 ) , and ui +1 = 1 / ( 2 \u2212 ui ) for i \u2265 1 constitutes a Nash equilibrium .\nPROOF .\nFirst , the player U \u2212 1 's payoffs do not depend on other players ' actions , so he is free to play any strategy in [ 0 , 1 ] .\nAs long as u0 = ( x ~ ~ \u2212 x ~ ) u \u2212 1 + x ~ , player V \u2212 1 is indifferent between 0 and 1 , so he is content to play 1/2 ; a similar argument applies to players V0 , ... , Vk \u2212 1 .\nFinally , for each i = 0 , ... 
, k , the payoffs of player Ui only depend on the strategy of player Vi \u2212 1 .\nIn particular , as long as vi \u2212 1 = 1/2 , player Ui is indifferent between playing 0 and 1 , so he can play any mixed strategy ui \u2208 [ 0 , 1 ] .\nTo complete the proof , note that ( x ~ ~ \u2212 x ~ ) u \u2212 1 + x ~ \u2208 [ 0 , 1 ] for all u \u2212 1 \u2208 [ 0 , 1 ] , u0 / ( u0 + 1 ) \u2208 [ 0 , 1 ] for all u0 \u2208 [ 0 , 1 ] , and 1 / ( 2 \u2212 ui ) \u2208 [ 0 , 1 ] for all ui \u2208 [ 0 , 1 ] , so we have ui \u2208 [ 0 , 1 ] for all i = 0 , ... , k. Now , let us compute the total payoff under a strategy profile of the form given in Claim 5 .\nThe payoff to U \u2212 1 is 0 , and the expected payoff to each of the Ui , i = 0 , ... , k , is 1/2 .\nThe expected payoffs to V \u2212 1 and V0 are 0 .\nFinally , for any i = 1 , ... , k \u2212 1 , the expected payoff to Vi is Ti = Aiuiui +1 \u2212 Aiui +1 .\nIt follows that to find a Nash equilibrium with the highest total payoff , we have to maximize Pk \u2212 1\nWe would like to express Pk \u2212 1\nObserve that as u \u2212 1 varies from 0 to 1 , u varies from x ~ to x ~ ~ .\nTherefore , to maximize the total payoff , we have to choose u \u2208 [ x ~ , x ~ ~ ] so as to maximize\nBy construction , the only maximum of R ( u ) on [ x ~ , x ~ ~ ] is \u03b1 .\nIt follows that in the payoff-maximizing Nash equilibrium of our game U0 plays \u03b1 .\nFinally , note that the payoffs in our game are rational rather than integer .\nHowever , it is easy to see that we can multiply all payoffs to a player by their greatest common denominator without affecting his strategy .\nIn the resulting game , all payoffs are integer .\nThis concludes the proof of Theorem 2 .\n4 .\nAPPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM\nWe have seen that the Nash equilibrium that maximizes the social welfare may involve strategies that are not in Q. Hence , in this section we focus on finding a Nash equilibrium that is almost optimal from the social welfare perspective .\nWe propose an algorithm that for any e > 0 finds a Nash equilibrium whose total payoff is within a from optimal .\nThe running time of this algorithm is polynomial in 1/e , n and | Pmax | ( recall that Pmax is the maximum absolute value of an entry of a payoff matrix ) .\nWhile the negative result of the previous section is for graphical games on paths , our algorithm applies to a wider range of scenarios .\nNamely , it runs in polynomial time on bounded-degree trees\nas long as the best response policy of each vertex , given its parent , can be represented as a union of a polynomial number of rectangles .\nNote that path graphs always satisfy this condition : in [ 9 ] we showed how to compute such a representation , given a graph with maximum degree 2 .\nConsequently , for path graphs the running time of our algorithm is guaranteed to be polynomial .\n( Note that [ 9 ] exhibits a family of graphical games on bounded-degree trees for which the best response policies of some of the vertices , given their parents , have exponential size , when represented as unions of rectangles . )\nDue to space restrictions , in this version of the paper we present the algorithm for the case where the graph underlying the graphical game is a path .\nWe then state our result for the general case ; the proof can be found in the full version of this paper [ 10 ] .\nSuppose that s is a strategy profile for a graphical game G .\nThat is , s assigns a mixed strategy to each vertex of G. 
let EPV ( s ) be the expected payoff of player V under s and let EP ( s ) =\nPROOF .\nLet { V1 , ... , Vn } be the set of all players .\nWe start by constructing the best response policies for all Vi , i = 1 , ... , n \u2212 1 .\nAs shown in [ 9 ] , this can be done in time O ( n3 ) .\nLet N > 5n be a parameter to be selected later , set \u03b4 = 1/N , and define X = { j\u03b4 | j = 0 , ... , N } .\nWe say that vj is an event point for a player Vi if it is a Vi-event point for B ( Vi , Vi-1 ) or B ( Vi +1 , Vi ) .\nFor each player Vi , consider a finite set of strategies Xi given by\nIt has been shown in [ 9 ] that for any i = 2 , ... , n , the best response policy B ( Vi , Vi-1 ) has at most 2n + 4 Vi-event points .\nAs we require N > 5n , we have | Xi | \u2264 2N ; assume without loss of generality that | Xi | = 2N .\nOrder the elements of Xi in increasing order as x1i = 0 < x2i < \u00b7 \u00b7 \u00b7 < x2N i .\nWe will refer to the strategies in Xi as discrete strategies of player Vi ; a strategy profile in which each player has a discrete strategy will be referred to as a discrete strategy profile .\nWe will now show that even we restrict each player Vi to strategies from Xi , the players can still achieve a Nash equilibrium , and moreover , the best such Nash equilibrium ( with respect to the social welfare ) has total payoff at least M ( G ) \u2212 ~ as long as N is large enough .\nLet s be a strategy profile that maximizes social welfare .\nThat is , let s = ( s1 , ... , sn ) where si is the mixed strategy of player Vi and EP ( s ) = M ( G ) .\nFor i = 1 , ... , n , let ti = max { xji | xji \u2264 si } .\nFirst , we will show that the strategy profile t = ( t1 , ... , tn ) is a Nash equilibrium for G. Fix any i , 1 < i \u2264 n , and let R = [ v1 , v2 ] \u00d7 [ u1 , u2 ] be the rectangle in B ( Vi , Vi-1 ) that contains ( si , si-1 ) .\nAs v1 is a Vi-event point of B ( Vi , Vi-1 ) , we have v1 \u2264 ti , so the point ( ti , si-1 ) is inside R. Similarly , the point u1 is a Vi-1-event point of B ( Vi , Vi-1 ) , so we have u1 \u2264 ti-1 , and therefore the point ( ti , ti-1 ) is inside R .\nThis means that for any i , 1 < i \u2264 n , we have ti-1 \u2208 pbrVI \u2212 1 ( ti ) , which implies that t = ( t1 , ... , tn ) is a Nash equilibrium for G. Now , let us estimate the expected loss in social welfare caused by playing t instead of s. LEMMA 3 .\nFor any pair of strategy profiles t , s such that | ti \u2212 si | \u2264 \u03b4 we have | EPVI ( s ) \u2212 EPVI ( t ) | \u2264 24Pmax\u03b4 for any i = 1 , ... , n. PROOF .\nLet Piklm be the payoff of the player Vi , when he plays k , Vi-1 plays l , and Vi +1 plays m. Fix i = 1 , ... 
, n and for k , l , m \u2208 { 0 , 1 } , set\nWe will now show that for any k , l , m \u2208 { 0 , 1 } we have | tklm \u2212\nObserve that if k = 0 then x \u2212 x ' = ( 1 \u2212 ti-1 ) \u2212 ( 1 \u2212 si-1 ) , and if k = 1 then x \u2212 x ' = ti-1 \u2212 si-1 , so | x \u2212 x ' | \u2264 \u03b4 .\nA similar argument shows | y \u2212 y ' | \u2264 \u03b4 , | z \u2212 z ' | \u2264 \u03b4 .\nAlso , we have x , x ' , y , y ' , z , z ' \u2208 [ 0 , 1 ] .\nHence , | tklm \u2212 sklm | = | xyz \u2212 x ` y' z ' | = | xyz \u2212 x ` yz + x ` yz \u2212 x ` y' z + x ` y' z \u2212 x ` y' z ' | \u2264 | x \u2212 x ' | yz + | y \u2212 y ' | x ' z + | z \u2212 z ' | x ` y ' \u2264 3\u03b4 .\nLemma 3 implies Pni = 1 | EPVI ( s ) \u2212 EPVI ( t ) | \u2264 24nPmax\u03b4 , so by choosing \u03b4 < ~ / ( 24nPmax ) , or , equivalently , setting N > 24nPmax / ~ , we can ensure that the total expected payoff for the strategy profile t is within ~ from optimal .\nWe will now show that we can find the best discrete Nash equilibrium ( with respect to the social welfare ) using dynamic programming .\nAs t is a discrete strategy profile , this means that the strategy profile found by our algorithm will be at least as good as t.\nDefine ml , k\ni to be the maximum total payoff that V1 , ... , Vi-1 can achieve if each Vj , j \u2264 i , chooses a strategy from Xj , for each j < i the strategy of Vj is a potential best response to the strategy of Vj +1 , and , moreover , Vi-1 plays xli-1 , Vi plays xki .\nIf there is no way to choose the strategies for V1 , ... , Vi-1 to satisfy these conditions , we set ml , ki = \u2212 \u221e .\nThe values ml , k i , i = 1 , ... , n ; k , l = 1 , ... , N , can be computed inductively , as follows .\nWe have ml , k 1 = 0 for k , l = 1 , ... , N. Now , suppose that we have already computed ml , k j for all j < i ; k , l = 1 , ... , N. To compute mi , we first check if ( xk k , l i , xli-1 ) \u2208 B ( Vi , Vi-1 ) .\nIf this is not the case , we have ml , k i = \u2212 \u221e .\nOtherwise , consider the set Y = Xi-2 \u2229 pbrVI \u2212 2 ( xli-1 ) , i.e. , the set of all discrete strategies of Vi-2 that are potential best responses to xli-1 .\nThe proof of Theorem 1 in [ 9 ] implies that the set pbrVI \u2212 2 ( xli-1 ) is non-empty : the player Vi-2 has a potential best response to any strategy of Vi-1 , in particular , xli-1 .\nBy construction of the set Xi-2 , this implies that Y is not empty .\nFor each xji-2 \u2208 Y , let pjlk be the payoff that Vi-1 receives when Vi-2 plays xji-2 , Vi-1 plays xli-1 , and Vi plays xki .\nClearly , pjlk can be computed in constant time .\nThen we have ml , ki = max { mj , l\nFinally , suppose that we have computed ml , k n for l , k = 1 , ... , N .\nWe still need to take into account the payoff of player Vn .\nHence ,\nwe consider all pairs ( xk n , xl n_1 ) that satisfy xln_1 E pbrVn \u2212 1 ( xkn ) , and pick the one that maximizes the sum of mk , l n and the payoff of Vn when he plays xknand Vn_1 plays xl n_1 .\nThis results in the maximum total payoff the players can achieve in a Nash equilibrium using discrete strategies ; the actual strategy profile that produces this payoff can be reconstructed using standard dynamic programming techniques .\nIt is easy to see that each ml , k i can be computed in time O ( N ) , i.e. 
, all of them can be computed in time O ( nN3 ) .\nRecall that we have to select N > ( 24nPmax ) / E to ensure that the strategy profile we output has total payoff that is within E from optimal .\nWe conclude that we can compute an E-approximation to the best Nash equilibrium in time O ( n4P3max/E3 ) .\nThis completes the proof of Theorem 3 .\nTo state our result for the general case ( i.e. , when the underlying graph is a bounded-degree tree rather than a path ) , we need additional notation .\nIf G has n players , let q ( n ) be an upper bound on the number of event points in the representation of any best response policy .\nThat is , we assume that for any vertex U with parent V , B ( V , U ) has at most q ( n ) event points .\nWe will be interested in the situation in which q ( n ) is polynomial in n. THEOREM 4 .\nLet G be an n-player graphical game on a tree in which each node has at most \u0394 children .\nSuppose we are given a set of best-response policies for G in which each best-response policy B ( V , U ) is represented by a set of rectangles with at most q ( n ) event points .\nFor any E > 0 , there is an algorithm that constructs a Nash equilibrium s ' for G that satisfies EP ( s ' ) > M ( G ) \u2212 E .\nThe running time of the algorithm is polynomial in n , Pmax and E_1 provided that the tree has bounded degree ( that is , \u0394 = O ( 1 ) ) and q ( n ) is a polynomial in n .\nIn particular , if\nand \u0394 > 1 then the running time is O ( n\u0394 ( 2N ) \u0394 .\nFor the proof of this theorem , see [ 10 ] .\n4.1 A polynomial-time algorithm for multiplicative approximation\nThe running time of our algorithm is pseudopolynomial rather than polynomial , because it includes a factor which is polynomial in Pmax , the maximum ( in absolute value ) entry in any payoff matrix .\nIf we are interested in multiplicative approximation rather than additive one , this can be improved to polynomial .\nFirst , note that we can not expect a multiplicative approximation for all inputs .\nThat is , we can not hope to have an algorithm that computes a Nash equilibrium with total payoff at least ( 1 \u2212 E ) M ( G ) .\nIf we had such an algorithm , then for graphical games G with M ( G ) = 0 , the algorithm would be required to output the optimal solution .\nTo show that this is infeasible , observe that we can use the techniques of Section 3.2 to construct two integercoefficient graphical games on paths of length O ( n ) such that for some X E R the maximal total payoff in the first game is X , the maximal total payoff in the second game is \u2212 X , and for both games , the strategy profiles that achieve the maximal total payoffs involve algebraic numbers of degree n. 
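As an aside, the dynamic program described above for Theorem 3 (restrict each V_i to the discrete set X_i, check membership in the best response policy for each consecutive pair, and maximize the accumulated payoff) can be sketched as follows. This is a minimal Python sketch under stated assumptions, not the paper's implementation: the helpers in_pbr and payoff and the argument layout are hypothetical placeholders for the best response policies B(V_{i+1}, V_i) and the expected-payoff computation.

```python
import math
from itertools import product

def best_discrete_nash_value(X, payoff, in_pbr):
    """Sketch of the dynamic program from the proof of Theorem 3, for a path
    V_1, ..., V_n restricted to discrete strategy sets X[1], ..., X[n]
    (X[0] is an unused placeholder).  The helpers are assumptions, since the
    paper gives no pseudocode:

      in_pbr(i, v, w)    -- True iff v is a potential best response of V_i
                            when its parent V_{i+1} plays w, i.e. (w, v)
                            lies in the best response policy B(V_{i+1}, V_i);
      payoff(i, u, v, w) -- expected payoff of V_i when V_{i-1} plays u,
                            V_i plays v and V_{i+1} plays w (None stands
                            for a missing neighbour at the endpoints).

    Returns the largest total payoff over discrete profiles in which every
    V_i plays a potential best response to V_{i+1}; by the argument in the
    text, any such profile is an exact Nash equilibrium.
    """
    n = len(X) - 1
    # dp[(u, v)]: best total payoff of V_1..V_{i-1} given V_{i-1}=u, V_i=v
    dp = {(u, v): (payoff(1, None, u, v) if in_pbr(1, u, v) else -math.inf)
          for u, v in product(X[1], X[2])}
    for i in range(3, n + 1):
        new_dp = {}
        for u, v in product(X[i - 1], X[i]):
            if not in_pbr(i - 1, u, v):
                new_dp[(u, v)] = -math.inf
                continue
            # extend the best choice of V_{i-2}'s strategy, adding V_{i-1}'s payoff
            new_dp[(u, v)] = max(
                (dp[(t, u)] + payoff(i - 1, t, u, v) for t in X[i - 2]),
                default=-math.inf)
        dp = new_dp
    # finally add the payoff of the root V_n (a dummy player with payoff 0
    # in the paper's setup, but kept general here)
    return max(dp[(u, v)] + payoff(n, u, v, None)
               for u, v in product(X[n - 1], X[n]))
```

The actual strategy profile can be recovered by additionally storing, for each state, the maximizing choice of the grandparent's strategy, exactly as in standard dynamic programming.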
By combining the two games so that the first vertex of the second game becomes connected to the last vertex of the first game , but the payoffs of all players do not change , we obtain a graphical game in which the best Nash equilibrium has total payoff 0 , yet the strategies that lead to this payoff have high algebraic complexity .\nHowever , we can achieve a multiplicative approximation when all entries of the payoff matrices are positive and the ratio between any two entries is polynomially bounded .\nRecall that we assume that all payoffs are integer , and let Pmin > 0 be the smallest entry of any payoff matrix .\nIn this case , for any strategy profile the payoff to player i is at least Pmin , so the total payoff in the social-welfare maximizing Nash equilibrium s satisfies M ( G ) > nPmin .\nMoreover , Lemma 3 implies that by choosing \u03b4 < E / ( 24Pmax/Pmin ) , we can ensure that the Nash equilibrium t produced by our algorithm satisfies\ni.e. , for this value of \u03b4 we have Pn i = 1 EPVi ( t ) > ( 1 \u2212 E ) M ( G ) .\nRecall that the running time of our algorithm is O ( nN3 ) , where N has to be selected to satisfy N > 5n , N = 1 / \u03b4 .\nIt follows that if Pmin > 0 , Pmax/Pmin = poly ( n ) , we can choose N so that our algorithm provides a multiplicative approximation guarantee and runs in time polynomial in n and 1/E .\n5 .\nBOUNDED PAYOFF NASH EQUILIBRIA\nAnother natural way to define what is a `` good '' Nash equilibrium is to require that each player 's expected payoff exceeds a certain threshold .\nThese thresholds do not have to be the same for all players .\nIn this case , in addition to the payoff matrices of the n players , we are given n numbers T1 , ... , Tn , and our goal is to find a Nash equilibrium in which the payoff of player i is at least Ti , or report that no such Nash equilibrium exists .\nIt turns out that we can design an FPTAS for this problem using the same techniques as in the previous section .\nTHEOREM 5 .\nGiven a graphical game G on an n-vertex path and n rational numbers T1 , ... , Tn , suppose that there exists a strategy profile s such that s is a Nash equilibrium for G and EPVi ( s ) > Ti for i = 1 , ... , n .\nThen for any E > 0 we can find in time O ( max { nP3max/E3 , n4/E3 } ) a strategy profile s ' such\nPROOF .\nThe proof is similar to that of Theorem 3 .\nFirst , we construct the best response policies for all players , choose N > 5n , and construct the sets Xi , i = 1 , ... , n , as described in the proof of Theorem 3 .\nConsider a strategy profile s such that s is a Nash equilibrium for G and EPVi ( s ) > Ti for i = 1 , ... , n .\nWe construct a strategy profile ti = max { xji | xji < si } and use the same argument as in the proof of Theorem 3 to show that t is a Nash equilibrium for G. By Lemma 3 , we have | EPVi ( s ) \u2212 EPVi ( t ) | < 24Pmax\u03b4 , so choosing \u03b4 < E / ( 24Pmax ) , or , equivalently , N > max { 5n , 24Pmax/E } , we can ensure EPVi ( t ) > Ti \u2212 E for i = 1 , ... , n. Now , we will use dynamic programming to find a discrete Nash equilibrium that satisfies EPVi ( t ) > Ti \u2212 E for i = 1 , ... , n .\nAs t is a discrete strategy profile , our algorithm will succeed whenever there is a strategy profile s with EPVi ( s ) > Ti \u2212 E for i = 1 , ... , n. 
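The rounding step used in this proof (and in the proof of Theorem 3) admits a short sketch. The following Python fragment is illustrative only, assuming the V_i-event points of the best response policies have already been computed; the function and argument names are not from the paper.

```python
def discretize_profile(s, event_points, n, pmax, eps):
    """Sketch of the rounding step used in the proofs of Theorems 3 and 5.
    The names are illustrative (the paper gives no code): s[1..n] is the
    strategy profile to round, event_points[i] is assumed to contain the
    V_i-event points of B(V_i, V_{i-1}) and B(V_{i+1}, V_i), and pmax is
    the largest absolute payoff entry.
    """
    # N > max(5n, 24*Pmax/eps), as required in the proof of Theorem 5
    N = max(5 * n + 1, int(24 * pmax / eps) + 1)
    delta = 1.0 / N
    X, t = [None], [None]                     # 1-based, index 0 unused
    for i in range(1, n + 1):
        grid = sorted({j * delta for j in range(N + 1)} | set(event_points[i]))
        X.append(grid)
        # t_i = max { x in X_i : x <= s_i }
        t.append(max(x for x in grid if x <= s[i]))
    return X, t
```

By Lemma 3, replacing s with the rounded profile t changes each player's expected payoff by at most 24·Pmax·δ, which is below ε for this choice of N.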
Let zl , k i = 1 if there is a discrete strategy profile such that for any j < i the strategy of the player Vj is a potential best response to the strategy of Vj +1 , the expected payoff of Vj is at least Tj \u2212 E , and , moreover , Vi_1 plays xli_1 , Vi plays xki .\nOtherwise , let zl , k i = 0 .\nWe can compute zl , k i , i = 1 , ... , n ; k , l = 1 , ... , N inductively , as follows .\nWe have zl , k 1 = 1 for k , l = 1 , ... , N. Now , suppose that we have already computed zl , k j for all j < i ; k , l = 1 , ... , N. To compute zk , l i , we first check if ( xki , xli_1 ) E B ( Vi , Vi_1 ) .\nIf this\nis not the case , clearly , zk , l i = 0 .\nOtherwise , consider the set Y = Xi_2 flpbrV , \u2212 2 ( xli_1 ) , i.e. , the set of all discrete strategies of Vi_2 that are potential best responses to xli_1 .\nIt has been shown in the proof of Theorem 3 that Y = ~ 0 .\nFor each xji_2 E Y , let pjlk be the payoff that Vi_1 receives when Vi_2 plays xji_2 , Vi_1 plays xli_1 , and Vi plays xki .\nClearly , pjlk can be computed in constant time .\nIf there exists an xji_2 E Y such that zj , l\nHaving computed zl , kn , l , k = 1 , ... , N , we check if zl , k n = 1 for some pair ( l , k ) .\nif such a pair of indices exists , we instruct Vn to play xkn and use dynamic programming techniques ( or , equivalently , the downstream pass of the algorithm of [ 13 ] ) to find a Nash equilibrium s ' that satisfies EPV , ( s ' ) > Ti \u2212 E for i = 1 , ... , n ( recall that Vn is a dummy player , i.e. , we assume Tn = 0 , EPn ( s ' ) = 0 for any choice of s ' ) .\nIf zl , k n = 0 for all l , k = 1 , ... , N , there is no discrete Nash equilibrium s ' that satisfies EPV , ( s ' ) > Ti \u2212 E for i = 1 , ... , n and hence no Nash equilibrium s ( not necessarily discrete ) such that EPV , ( s ) > Ti for i = 1 , ... , n .\nThe running time analysis is similar to that for Theorem 3 ; we conclude that the running time of our algorithm is O ( nN3 ) = O ( max { nP3max/E3 , n4/E3 } ) .\nREMARK 1 .\nTheorem 5 can be extended to trees of bounded degree in the same way as Theorem 4 .\n5.1 Exact Computation\nAnother approach to finding Nash equilibria with bounded payoffs is based on inductively computing the subsets of the best response policies of all players so as to exclude the points that do not provide sufficient payoffs to some of the players .\nFormally , we say that a strategy v of the player V is a potential best response to a strategy w of its parent W with respect to a threshold vector T = ( T1 , ... , Tn ) , ( denoted by v E pbrV ( w , T ) ) if there is an equilibrium in the instance G ( V , W ) , W = w in which V plays mixed strategy v and the payoff to any player Vi downstream of V ( including V ) is at least Ti .\nThe best response policy for V with respect to a threshold vector T is defined as B ( W , V , T ) = { ( w , v ) | v E pbrV ( w , T ) , w E [ 0 , 1 ] } .\nIt is easy to see that if any of the sets B ( Vj , Vj_1 , T ) , j = 1 , ... , n , is empty , it means that it is impossible to provide all players with expected payoffs prescribed by T. Otherwise , one can apply the downstream pass of the original algorithm of [ 13 ] to find a Nash equilibrium .\nAs we assume that Vn is a dummy vertex whose payoff is identically 0 , the Nash equilibrium with these payoffs exists as long as Tn < 0 and B ( Vn , Vn_1 , T ) is not empty .\nUsing the techniques developed in [ 9 ] , it is not hard to show that for any j = 1 , ... 
, n , the set B ( Vj , Vj_1 , T ) consists of a finite number of rectangles , and one can compute B ( Vj +1 , Vj , T ) given B ( Vj , Vj_1 , T ) .\nThe advantage of this approach is that it allows us to represent all Nash equilibria that provide required payoffs to the players .\nHowever , it is not likely to be practical , since it turns out that the rectangles that appear in the representation of B ( Vj , Vj_1 , T ) may have irrational coordinates .\nCLAIM 6 .\nThere exists a graphical game G on a 3-vertex path UV W and a vector T = ( T1 , T2 , T3 ) such that B ( V , W , T ) can not be represented as a union of a finite number of rectangles with rational coordinates .\nPROOF .\nWe define the payoffs to the players in G as follows .\nThe payoff to U is identically 0 , i.e. , P0 ( U ) = P1 ( U ) = 0 .\nUsing Claim 1 , we select the payoffs to V so that P0 ( V ) = uw , P1 ( V ) = P0 ( V ) + w \u2212 .8 u \u2212 .1 , where u and w are the ( mixed ) strategies of U and W , respectively .\nIt follows that V is indifferent between playing 0 and 1 if and only if w = f ( u ) = .8 u + .1 ; observe that for any u E [ 0 , 1 ] we have f ( u ) E [ 0 , 1 ] .\nIt is not hard to see that we have B ( W , V ) = [ 0 , .1 ] x { 0 } u [ .1 , .9 ] x [ 0 , 1 ] u [ .9 , 1 ] x { 1 } .\nThe payoffs to W are not important for our construction ; for example , set P0 ( W ) = P0 ( W ) = 0 .\nNow , set T = ( 0 , 1/8 , 0 ) , i.e. , we are interested in Nash equilibria in which V 's expected payoff is at least 1/8 .\nSuppose w E [ 0 , 1 ] .\nThe player V can play a mixed strategy v when W is playing w as long as U plays u = f _ 1 ( w ) = 5w/4 \u2212 1/8 ( to ensure that V is indifferent between 0 and 1 ) and P0 ( V ) = P1 ( V ) = uw = w ( 5w/4 \u2212 1/8 ) > 1/8 .\nThe latter condition is satisfied if have .1 < ( 1 + \\ / 41 ) / 20 < .9 .\nFor any other value of w , any w < ( 1 \u2212 \\ / 41 ) / 20 < 0 or w > ( 1 + \\ / 41 ) / 20 .\nNote that we strategy of U either makes V prefer one of the pure strategies or does not provide it with a sufficient expected payoff .\nThere are also some values of w for which V can play a pure strategy ( 0 or 1 ) as a potential best response to W and guarantee itself an expected payoff of at least 1/8 ; it can be shown that these values of w form a finite number of segments in [ 0 , 1 ] .\nWe conclude that any repremust contain a rectangle of the form [ ( 1 + \\ / 41 ) / 20 , w '' ] x [ v ' , v '' ] sentation of B ( W , V , T ) as a union of a finite number of rectangles for some w '' , v ' , v '' E [ 0 , 1 ] .\nOn the other hand , it can be shown that for any integer payoff matrices and threshold vectors and any j = 1 , ... , n \u2212 1 , the sets B ( Vj +1 , Vj , T ) contain no rectangles of the form [ u ' , u '' ] x { v } or { v } x [ w ' , w '' ] , where v E R \\ Q .\nThis means that if B ( Vn , Vn_1 , T ) is non-empty , i.e. 
, there is a Nash equilibrium with payoffs prescribed by T , then the downstream pass of the algorithm of [ 13 ] can always pick a strategy profile that forms a Nash equilibrium , provides a payoff of at least Ti to the player Vi , and has no irrational coordinates .\nHence , unlike in the case of the Nash equilibrium that maximizes the social welfare , working with irrational numbers is not necessary , and the fact that the algorithm discussed in this section has to do so can be seen as an argument against using this approach .\n6 .\nOTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM\nIn this section , we consider several other criteria that can be useful in selecting a Nash equilibrium .\n6.1 Combining welfare maximization with bounds on payoffs\nIn many real life scenarios , we want to maximize the social welfare subject to certain restrictions on the payoffs to individual players .\nFor example , we may want to ensure that no player gets a negative expected payoff , or that the expected payoff to player i is at least Pi max \u2212 \u03be , where Pi max is the maximum entry of i 's payoff matrix and \u03be is a fixed parameter .\nFormally , given a graphical game G and a vector T1 , ... , Tn , let S be the set of all Nash equilibria s of G that satisfy Ti < EPV , ( s ) for i = 1 , ... , n , and let s\u02c6 = argmaxsEs EP ( s ) .\nIf the set S is non-empty , we can find a Nash equilibrium\u02c6s ' that is E-close to satisfying the payoff bounds and is within E from s\u02c6 with respect to the total payoff by combining the algorithms of Section 4 and Section 5 .\nNamely , for a given E > 0 , choose \u03b4 as in the proof of Theorem 3 , and let Xi be the set of all discrete strategies of player Vi ( for a\nformal definition , see the proof of Theorem 3 ) .\nCombining the proofs of Theorem 3 and Theorem 5 , we can see that the strategy profile t\u02c6 given by \u02c6ti = max { xji I xji < \u02c6si } satisfies EPVi ( \u02c6t ) > Ti -- E , IEP ( \u02c6s ) -- EP ( \u02c6t ) I < E. Define \u02c6ml , k i to be the maximum total payoff that V1 , ... , Vi \u2212 1 can achieve if each Vj , j < i , chooses a strategy from Xj , for each j < i the strategy of Vj is a potential best response to the strategy of Vj +1 and the payoff to player Vj is at least Tj -- E , and , moreover , Vi \u2212 1 plays xli \u2212 1 , Vi plays xki .\nIf there is no way to choose the strategies for V1 , ... 
, Vi \u2212 1 to satisfy these conditions , we set ml , k i = -- oo .\nThe \u02c6ml , k i can be computed by dynamic programming similarly to the ml , k i and zl , k i in the proofs of Theorems 3 and 5 .\nFinally , as in the proof of Theorem 3 , we use ml , k n to select the best discrete Nash equilibrium subject to the payoff constraints .\nEven more generally , we may want to maximize the total payoff to a subset of players ( who are assumed to be able to redistribute the profits fairly among themselves ) while guaranteeing certain expected payoffs to ( a subset of ) the other players .\nThis problem can be handled similarly .\n6.2 A minimax approach\nA more egalitarian measure of the quality of a Nash equilibrium is the minimal expected payoff to a player .\nThe optimal solution with respect to this measure is a Nash equilibrium in which the minimal expected payoff to a player is maximal .\nTo find an approximation to such a Nash equilibrium , we can combine the algorithm of Section 5 with binary search on the space of potential lower bounds .\nNote that the expected payoff to any player Vi given a strategy s always satisfies -- Pmax < EPVi ( s ) < Pmax .\nFor a fixed E > 0 , we start by setting T ~ = -- Pmax , T ~ ~ = Pmax , T \u2217 = ( T ~ + T ~ ~ ) / 2 .\nWe then run the algorithm of Section 5 with T1 = \u2022 \u2022 \u2022 = Tn = T \u2217 .\nIf the algorithm succeeds in finding a Nash equilibrium s ~ that satisfies EPVi ( s ~ ) > T \u2217 -- E for all i = 1 , ... , n , we set T ~ = T \u2217 , T \u2217 = ( T ~ + T ~ ~ ) / 2 ; otherwise , we set T ~ ~ = T \u2217 , T \u2217 = ( T ~ + T ~ ~ ) / 2 and loop .\nWe repeat this process until IT ~ -- T ~ ~ I < E .\nIt is not hard to check that for any p E R , if there is a Nash equilibrium s such that mini = 1 , ... , n EPVi ( s ) > p , then our algorithm outputs a Nash equilibrium s ~ that satisfies mini = 1 , ... , n EPVi ( s ) > p -- 2E .\nThe running time of our algorithm is O ( max { nP3 max log E \u2212 1/E3 , n4 log E \u2212 1/E3 } ) .\n6.3 Equalizing the payoffs\nWhen the players ' payoff matrices are not very different , it is reasonable to demand that the expected payoffs to the players do not differ by much either .\nWe will now show that Nash equilibria in this category can be approximated in polynomial time as well .\nIndeed , observe that the algorithm of Section 5 can be easily modified to deal with upper bounds on individual payoffs rather than lower bounds .\nMoreover , we can efficiently compute an approximation to a Nash equilibrium that satisfies both the upper bound and the lower bound for each player .\nMore precisely , suppose that we are given a graphical game G , 2n rational numbers T1 , ... , Tn , T ~ 1 , ... , Tn ~ and E > 0 .\nThen if there exists a strategy profile s such that s is a Nash equilibrium for G and Ti < EPVi ( s ) < Ti ~ for i = 1 , ... , n , we can find a strategy profile s ~ such that s ~ is a Nash equilibrium for G and Ti -- E < EPVi ( s ~ ) < Ti ~ + E for i = 1 , ... 
, n .\nThe modified algorithm also runs in time O ( max { nP3max/E3 , [ 4 ] n4/E3 } ) .\nThis observation allows us to approximate Nash equilibria in which all players ' expected payoffs differ by at most \u03be for any fixed \u03be > 0 .\nGiven an E > 0 , we set T1 = \u2022 \u2022 \u2022 = Tn = -- Pmax , T1 ~ = \u2022 \u2022 \u2022 = Tn ~ = -- Pmax + \u03be + E , and run the modified version of the algorithm of Section 5 .\nIf it fails to find a solution , we increment all Ti , Ti ~ by E and loop .\nWe continue until the algorithm finds a solution , or Ti > Pmax .\nSuppose that there exists a Nash equilibrium s that satisfies IEPVi ( s ) -- EPVj ( s ) I < \u03be for all i , j = 1 , ... , n. Set r = mini = 1 , ... , n EPVi ( s ) ; we have r < EPVi ( s ) < r + \u03be for all i = 1 , ... , n .\nThere exists a k > 0 such that -- Pmax + ( k -- 1 ) E < r < -- Pmax + kE .\nDuring the kth step of the algorithm , we set T1 = \u2022 \u2022 \u2022 = Tn = -- Pmax + ( k -- 1 ) E , i.e. , we have r -- E < Ti < r , r + \u03be < Ti ~ < r + \u03be + E .\nThat is , the Nash equilibrium s satisfies Ti < r < EPVi ( s ) < r + \u03be < Ti ~ , which means that when Ti is set to -- Pmax + ( k -- 1 ) E , our algorithm is guaranteed to output a Nash equilibrium t that satisfies r -- 2E < Ti -- E < EPVi ( t ) < Ti ~ + E < r + \u03be + 2E .\nWe conclude that whenever such a Nash equilibrium s exists , our algorithm outputs a Nash equilibrium t that satisfies IEPVi ( t ) -- EPVj ( t ) I < \u03be + 4E for all i , j = 1 , ... , n .\nThe running time of this algorithm is O ( max { nP3max/E4 , n4/E4 } ) .\nNote also that we can find the smallest \u03be for which such a Nash equilibrium exists by combining this algorithm with binary search over the space \u03be = [ 0 , 2Pmax ] .\nThis identifies an approximation to the `` fairest '' Nash equilibrium , i.e. , one in which the players ' expected payoffs differ by the smallest possible amount .\nFinally , note that all results in this section can be extended to bounded-degree trees .\n7 .\nCONCLUSIONS\nWe have studied the problem of equilibrium selection in graphical games on bounded-degree trees .\nWe considered several criteria for selecting a Nash equilibrium , such as maximizing the social welfare , ensuring a lower bound on the expected payoff of each player , etc. .\nFirst , we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium , and proved strong negative results for that problem .\nNamely , we showed that even for graphical games on paths , any algebraic number \u03b1 E [ 0 , 1 ] may be the only strategy available to some player in all social welfaremaximizing Nash equilibria .\nThis is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players ' strategies are rational numbers .\nWe then provided approximation algorithms for selecting Nash equilibria with special properties .\nWhile the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years , most of the existing work aims to find E-Nash equilibria that satisfy ( or are E-close to satisfying ) certain properties .\nOur approach is different in that we insist on outputting an exact Nash equilibrium , which is E-close to satisfying a given requirement .\nAs argued in the introduction , there are several reasons to prefer a solution that constitutes an exact Nash equilibrium .\nOur algorithms are fully polynomial time approximation schemes , i.e. 
, their running time is polynomial in the inverse of the approximation parameter $\epsilon$, though they may be pseudopolynomial with respect to the input size. Under mild restrictions on the inputs, they can be modified to be truly polynomial. This is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent, as is the case for many of the problems considered here. While we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles. In the full version of the paper we describe our algorithms for the general case. Further work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total payoffs for subsets of players, selecting equilibria in which some players receive significantly higher payoffs than their peers, etc. At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here. It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this will make the analysis significantly more difficult. In particular, note that one can view the bounded payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player $V_i$ has a third action that guarantees him a payoff of $T_i$ no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas. Alternatively, it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have a higher value (total expected payoff) than the best NE. The ratio between these values is called the mediation value in [1]. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite. Furthermore, a 2-player, 3-action example from [1] also has infinite mediation value."} {"id": "C-30", "title": "", "abstract": "", "keyphrases": ["overlai mesh", "data dissemin", "overlai network", "ip multicast", "multipoint commun", "high-bandwidth data distribut", "larg-file transfer", "real-time multimedia stream", "bullet", "bandwidth probe", "peer-to-peer", "ransub", "content deliveri", "tfrc", "bandwidth", "overlai"], "prmu": [], "lvl-1": "Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh Dejan Kostić, Adolfo Rodriguez, Jeannie Albrecht, and Amin Vahdat* Department of Computer Science Duke University {dkostic,razor,albrecht,vahdat}@cs.duke.edu ABSTRACT In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target
high-bandwidth data distribution from a single source to a large number of receivers.\nApplications include large-file transfers and real-time multimedia streaming.\nFor these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures.\nThis paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh.\nWe construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network.\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.\nKey contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances.\nIn addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.\nIn a tree, it is critical that a node``s parent delivers a high rate of application data to each child.\nIn Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; H.4.3 [Information Systems Applications]: Communications Applications General Terms Experimentation, Management, Performance 1.\nINTRODUCTION In this paper, we consider the following general problem.\nGiven a sender and a large set of interested receivers spread across the Internet, how can we maximize the amount of bandwidth delivered to receivers?\nOur problem domain includes software or video distribution and real-time multimedia streaming.\nTraditionally, native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion.\nHowever, a number of considerations, including scale, reliability, and congestion control, have limited the wide-scale deployment of IP multicast.\nEven if all these problems were to be addressed, IP multicast does not consider bandwidth when constructing its distribution tree.\nMore recently, overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery.\nTypical overlay structures attempt to mimic the structure of multicast routing trees.\nIn network-layer multicast however, interior nodes consist of high speed routers with limited processing power and extensibility.\nOverlays, on the other hand, use programmable (and hence extensible) end hosts as interior nodes in the overlay tree, with these hosts acting as repeaters to multiple children down the tree.\nOverlays have shown tremendous promise for multicast-style applications.\nHowever, we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability.\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree.\nAny loss high up the tree will reduce the bandwidth available to 
receivers lower down the tree.\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [2, 6].\nHowever, fundamentally, the bandwidth available to any host is limited by the bandwidth available from that node``s single parent in the tree.\nThus, our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined.\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss, we propose that participants in a multicast overlay cooperate to strategically 282 transmit disjoint data sets to various points in the network.\nHere, the sender splits data into sequential blocks.\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the network.\nNodes still receive a set of objects from their parents, but they are then responsible for locating peers that hold missing data objects.\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants.\nIn this way, we avoid the problem of locating the last object, which may only be available at a few nodes.\nOne hypothesis of this work is that, relative to a tree, this model will result in higher bandwidth-leveraging the bandwidth from simultaneous parallel downloads from multiple sources rather than a single parent-and higher reliability-retrieving data from multiple peers reduces the potential damage from a single node failure.\nTo illustrate Bullet``s behavior, consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available (TCP-friendly) bandwidth to each of A and B. 
However, there is also 1 Mbps of available bandwidth between A and B.\nIn this example, Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B.\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another, effectively achieving a retrieval rate of 2 Mbps.\nOn the other hand, any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data.\nAny solution for achieving the above model must maintain a number of properties.\nFirst, it must be TCP friendly [15].\nNo flow should consume more than its fair share of the bottleneck bandwidth and each flow must respond to congestion signals (losses) by reducing its transmission rate.\nSecond, it must impose low control overhead.\nThere are many possible sources of such overhead, including probing for available bandwidth between nodes, locating appropriate nodes to peer with for data retrieval and redundantly receiving the same data objects from multiple sources.\nThird, the algorithm should be decentralized and scalable to thousands of participants.\nNo node should be required to learn or maintain global knowledge, for instance global group membership or the set of data objects currently available at all nodes.\nFinally, the approach must be robust to individual failures.\nFor example, the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants; no single failure should result in the complete loss of data for any significant fraction of nodes, as might be the case for a single node failure high up in a multicast overlay tree.\nIn this context, this paper presents the design and evaluation of Bullet, an algorithm for constructing an overlay mesh that attempts to maintain the above properties.\nBullet nodes begin by self-organizing into an overlay tree, which can be constructed by any of a number of existing techniques [1, 18, 21, 24, 34].\nEach Bullet node, starting with the root of the underlying tree, then transmits a disjoint set of data to each of its children, with the goal of maintaining uniform representativeness of each data item across all participants.\nThe level of disjointness is determined by the bandwidth available to each of its children.\nBullet then employs a scalable and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node.\nThus, Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree.\nDepending on the type of data being transmitted, Bullet can optionally employ a variety of encoding schemes, for instance Erasure codes [7, 26, 25] or Multiple Description Coding (MDC) [17], to efficiently disseminate data, adapt to variable bandwidth, and recover from losses.\nFinally, we use TFRC [15] to transfer data both down the overlay tree and among peers.\nThis ensures that the entire overlay behaves in a congestion-friendly manner, adjusting its transmission rate on a per-connection basis based on prevailing network conditions.\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree.\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol.\nIn these trees, it is critical that each participant locates a parent via probing with a high level of 
available bandwidth because it receives data from only a single source (its parent). Thus, even once the tree is constructed, nodes must continue their probing to adapt to dynamically changing network conditions. While bandwidth probing is an active area of research [20, 35], accurate results generally require the transfer of a large amount of data to gain confidence in the results. Our approach with Bullet allows receivers to obtain high bandwidth in aggregate using individual transfers from peers spread across the system. Thus, in Bullet, the bandwidth available from any individual peer is much less important than in any bandwidth-optimized tree. Further, all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh. We have completed a prototype of Bullet running on top of a number of overlay trees. Our evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree (using an offline algorithm and global network topology information), all while remaining TCP friendly. We also deployed our prototype across the PlanetLab [31] wide-area testbed. For these live Internet runs, we find that Bullet can deliver comparable bandwidth performance improvements. In both cases, the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node, acceptable for our target high-bandwidth, large-scale scenarios. The remainder of this paper is organized as follows. Section 2 presents Bullet's system components including RanSub, informed content delivery, and TFRC. Section 3 then details Bullet, an efficient data distribution system for bandwidth intensive applications. Section 4 evaluates Bullet's performance for a variety of network topologies, and compares it to existing multicast techniques. Section 5 places our work in the context of related efforts and Section 6 presents our conclusions.
2. SYSTEM COMPONENTS
Our approach to high bandwidth data dissemination centers around the techniques depicted in Figure 1. First, we split the target data stream into blocks which are further subdivided into individual (typically packet-sized) objects. Depending on the requirements of the target applications, objects may be encoded [17, 26] to make data recovery more efficient. Next, we purposefully disseminate disjoint objects to different clients at a rate determined by the available bandwidth to each client.
Figure 1: High-level view of Bullet's operation.
We use the equation-based TFRC protocol to communicate among all nodes in the overlay in a congestion responsive and TCP friendly manner. Given the above techniques, data is spread across the overlay tree at a rate commensurate with the available bandwidth in the overlay tree. Our overall goal however is to deliver more bandwidth than would otherwise be available through any tree. Thus, at this point, nodes require a scalable technique for locating and retrieving disjoint data from their peers. In essence, these perpendicular links across the overlay form a mesh to augment the bandwidth available through the tree. In Figure 1, node D only has sufficient bandwidth to receive 3 objects per time unit from its parent. However, it is able to locate two peers, C and E, who
are able to transmit missing data objects, in this example increasing delivered bandwidth from 3 objects per time unit to 6 data objects per time unit. Locating appropriate remote peers cannot require global state or global communication. Thus, we propose the periodic dissemination of changing, uniformly random subsets of global state to each overlay node once per configurable time period. This random subset contains summary tickets of the objects available at a subset of the nodes in the system. Each node uses this information to request data objects from remote nodes that have significant divergence in object membership. It then attempts to establish a number of these peering relationships with the goals of minimizing overlap in the objects received from each peer and maximizing the total useful bandwidth delivered to it. In the remainder of this section, we provide brief background on each of the techniques that we employ as fundamental building blocks for our work. Section 3 then presents the details of the entire Bullet architecture.
2.1 Data Encoding
Depending on the type of data being distributed through the system, a number of data encoding schemes can improve system efficiency. For instance, if multimedia data is being distributed to a set of heterogeneous receivers with variable bandwidth, MDC [17] allows receivers obtaining different subsets of the data to still maintain a usable multimedia stream. For dissemination of a large file among a set of receivers, Erasure codes enable receivers not to focus on retrieving every transmitted data packet. Rather, after obtaining a threshold minimum number of packets, receivers are able to decode the original data stream. Of course, Bullet is amenable to a variety of other encoding schemes or even the null encoding scheme, where the original data stream is transmitted best-effort through the system. In this paper, we focus on the benefits of a special class of erasure-correcting codes used to implement the digital fountain [7] approach. Redundant Tornado [26] codes are created by performing XOR operations on a selected number of original data packets, and then transmitted along with the original data packets. Tornado codes require any (1 + ε)k correctly received packets to reconstruct the original k data packets, with a typically low reception overhead (ε) of 0.03-0.05. In return, they provide significantly faster encoding and decoding times. Additionally, the decoding algorithm can run in real-time, and the reconstruction process can start as soon as sufficiently many packets have arrived. Tornado codes require a predetermined stretch factor (n/k, where n is the total number of encoded packets), and their encoding time is proportional to n.
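As a toy illustration of the erasure-coding idea discussed above (deliberately much simpler than the Tornado and LT constructions), a single XOR parity block lets a receiver rebuild any one missing block; the helper names below are illustrative, not part of Bullet:

    def xor_parity(blocks):
        # One XOR parity block over k equal-length byte strings.
        parity = bytes(len(blocks[0]))
        for b in blocks:
            parity = bytes(x ^ y for x, y in zip(parity, b))
        return parity

    def recover_missing(received, parity):
        # 'received' holds k-1 of the k original blocks; XORing them with
        # the parity block reproduces the single block that was lost.
        missing = parity
        for b in received:
            missing = bytes(x ^ y for x, y in zip(missing, b))
        return missing

Real fountain codes generalize this idea to many parity packets so that any sufficiently large subset of packets suffices for reconstruction.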
LT codes [25] remove these two limitations, while maintaining a low reception overhead of 0.05.
2.2 RanSub
To address the challenge of locating disjoint content within the system, we use RanSub [24], a scalable approach to distributing changing, uniform random subsets of global state to all nodes of an overlay tree. RanSub assumes the presence of some scalable mechanism for efficiently building and maintaining the underlying tree. A number of such techniques are described in [1, 18, 21, 24, 34]. RanSub distributes random subsets of participating nodes throughout the tree using collect and distribute messages. Collect messages start at the leaves and propagate up the tree, leaving state at each node along the path to the root. Distribute messages start at the root and travel down the tree, using the information left at the nodes during the previous collect round to distribute uniformly random subsets to all participants. Using the collect and distribute messages, RanSub distributes a random subset of participants to each node once per epoch. The lower bound on the length of an epoch is determined by the time it takes to propagate data up then back down the tree, or roughly twice the height of the tree. For appropriately constructed trees, the minimum epoch length will grow with the logarithm of the number of participants, though this is not required for correctness. As part of the distribute message, each participant sends a uniformly random subset of remote nodes, called a distribute set, down to its children. The contents of the distribute set are constructed using the collect set gathered during the previous collect phase. During this phase, each participant sends a collect set consisting of a random subset of its descendant nodes up the tree to the root along with an estimate of its total number of descendants. After the root receives all collect sets and the collect phase completes, the distribute phase begins again in a new epoch. One of the key features of RanSub is the Compact operation. This is the process used to ensure that membership in a collect set propagated by a node to its parent is both random and uniformly representative of all members of the sub-tree rooted at that node. Compact takes multiple fixed-size subsets and the total population represented by each subset as input, and generates a new fixed-size subset. The members of the resulting set are uniformly random representatives of the input subset members.
Figure 2: This example shows the two phases of the RanSub protocol that occur in one epoch. The collect phase is shown on the left, where the collect sets are traveling up the overlay to the root. The distribute phase on the right shows the distribute sets traveling down the overlay to the leaf nodes.
RanSub offers several ways of constructing distribute sets. For our system, we choose the RanSub-nondescendants option. In this case, each node receives a random subset consisting of all nodes excluding its descendants. This is appropriate for our download structure where descendants are expected to have less content than an ancestor node in most cases. A parent creates RanSub-nondescendants distribute sets for each child by compacting collect sets from that child's siblings and its own distribute set. The result is a distribute set that contains a random subset representing all nodes in the tree except for those rooted at that particular child.
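The following Python sketch shows one plausible way to realize a Compact-style merge satisfying the property stated above (every member of the combined population equally likely to appear); it is an assumption for illustration, not RanSub's actual procedure:

    import random

    def compact(subsets, populations, size):
        # Each output slot is drawn from an input subset with probability
        # proportional to the population that subset represents, so any
        # member of the combined population is equally likely to appear.
        # Duplicates are not filtered in this toy version.
        result = []
        for _ in range(size):
            chosen = random.choices(subsets, weights=populations, k=1)[0]
            result.append(random.choice(chosen))
        return result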
We depict an example of RanSub's collect-distribute process in Figure 2. In the figure, AS stands for node A's state.
2.3 Informed Content Delivery Techniques
Assuming we can enable a node to locate a peer with disjoint content using RanSub, we need a method for reconciling the differences in the data. Additionally, we require a bandwidth-efficient method with low computational overhead. We chose to implement the approximate reconciliation techniques proposed in [6] for these tasks in Bullet. To describe the content, nodes maintain working sets. The working set contains sequence numbers of packets that have been successfully received by each node over some period of time. We need the ability to quickly discern the resemblance between working sets from two nodes and decide whether a fine-grained reconciliation is beneficial. Summary tickets, or min-wise sketches [5], serve this purpose. The main idea is to create a summary ticket that is an unbiased random sample of the working set. A summary ticket is a small fixed-size array. Each entry in this array is maintained by a specific permutation function. The goal is to have each entry populated by the element with the smallest permuted value. To insert a new element into the summary ticket, we apply the permutation functions in order and update array values as appropriate. The permutation function can be thought of as a specialized hash function. The choice of permutation functions is important as the quality of the summary ticket depends directly on the randomness properties of the permutation functions. Since we require them to have a low computational overhead, we use simple permutation functions, such as P_j(x) = (ax + b) mod |U|, where U is the universe size (dependent on the data encoding scheme). To compute the resemblance between two working sets, we compute the number of summary ticket entries that have the same value, and divide it by the total number of entries in the summary tickets. Figure 3 shows the way the permutation functions are used to populate the summary ticket.
Figure 3: Example showing a sample summary ticket being constructed from the working set.
To perform approximate fine-grain reconciliation, a peer A sends its digest to peer B and expects to receive packets not described in the digest.
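A minimal Python sketch of the summary-ticket construction and resemblance computation described above, using the simple (a*x + b) mod |U| permutations; the parameter triples are illustrative placeholders, not the values used in Bullet:

    def summary_ticket(working_set, perms):
        # Min-wise sketch: entry j keeps the minimum of permutation P_j
        # over all received sequence numbers. 'perms' is a list of (a, b, U).
        return [min((a * x + b) % u for x in working_set) for (a, b, u) in perms]

    def resemblance(ticket_a, ticket_b):
        # The fraction of matching entries estimates the overlap between
        # the two working sets.
        matches = sum(1 for x, y in zip(ticket_a, ticket_b) if x == y)
        return matches / float(len(ticket_a))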
For this purpose, we use a Bloom filter [4], a bit array of size m with k independent associated hash functions. An element s from the set of received keys S = {s_0, s_1, ..., s_{n-1}} is inserted into the filter by computing the hash values h_0, h_1, ..., h_{k-1} of s and setting the bits in the array that correspond to the hashed values. To check whether an element x is in the Bloom filter, we hash it using the hash functions and check whether all positions in the bit array are set. If at least one is not set, we know that the Bloom filter does not contain x. When using Bloom filters, the insertion of different elements might cause all the positions in the bit array corresponding to an element that is not in the set to be nonzero. In this case, we have a false positive. Therefore, it is possible that peer B will not send a packet to peer A even though A is missing it. On the other hand, a node will never send a packet that is described in the Bloom filter, i.e., there are no false negatives. The probability of getting a false positive $p_f$ on the membership query can be expressed as a function of the ratio m/n and the number of hash functions k: $p_f = (1 - e^{-kn/m})^k$. We can therefore choose the size of the Bloom filter and the number of hash functions that will yield a desired false positive ratio.
2.4 TCP Friendly Rate Control
Although most traffic in the Internet today is best served by TCP, applications that require a smooth sending rate and that have a higher tolerance for loss often find TCP's reaction to a single dropped packet to be unnecessarily severe. TCP Friendly Rate Control, or TFRC, targets unicast streaming multimedia applications with a need for less drastic responses to single packet losses [15]. TCP halves the sending rate as soon as one packet loss is detected. Alternatively, TFRC is an equation-based congestion control protocol that is based on loss events, which consist of multiple packets being dropped within one round-trip time. Unlike TCP, the goal of TFRC is not to find and use all available bandwidth, but instead to maintain a relatively steady sending rate while still being responsive to congestion. To guarantee fairness with TCP, TFRC uses the response function that describes the steady-state sending rate of TCP to determine the transmission rate in TFRC. The formula of the TCP response function [27] used in TFRC to describe the sending rate is:
$T = \dfrac{s}{R\sqrt{\tfrac{2p}{3}} + t_{RTO}\left(3\sqrt{\tfrac{3p}{8}}\right) p \left(1 + 32p^2\right)}$
This is the expression for the sending rate T in bytes/second, as a function of the round-trip time R in seconds, loss event rate p, packet size s in bytes, and TCP retransmit value t_RTO in seconds. TFRC senders and receivers must cooperate to achieve a smooth transmission rate. The sender is responsible for computing the weighted round-trip time estimate R between sender and receiver, as well as determining a reasonable retransmit timeout value t_RTO. In most cases, using the simple formula t_RTO = 4R provides the necessary fairness with TCP. The sender is also responsible for adjusting the sending rate T in response to new values of the loss event rate p reported by the receiver. The sender obtains a new measure for the loss event rate each time a feedback packet is received from the receiver. Until the first loss is reported, the sender doubles its transmission rate each time it receives feedback just as TCP does during slow-start. The main role of the receiver is to send feedback to the sender once per round-trip time and to calculate the loss event rate included in the feedback packets. To obtain the loss event rate, the receiver maintains a loss interval array that contains values for the last eight loss intervals. A loss interval is defined as the number of packets received correctly between two loss events.
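A small Python sketch that simply evaluates the TCP response function quoted above, using the text's simple choice t_RTO = 4R when no timeout estimate is supplied (the function name is illustrative, not part of the TFRC implementation):

    from math import sqrt

    def tfrc_rate(s, R, p, t_rto=None):
        # s: packet size in bytes, R: round-trip time in seconds,
        # p: loss event rate (must be > 0), t_rto: retransmit timeout.
        # Returns the allowed sending rate in bytes/second.
        if t_rto is None:
            t_rto = 4.0 * R
        denom = R * sqrt(2.0 * p / 3.0) \
                + t_rto * (3.0 * sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p * p)
        return s / denom

For instance, tfrc_rate(1500, 0.1, 0.01) evaluates to roughly 170 KB/s (about 1.3 Mbps), illustrating how the allowed rate falls as the loss event rate or round-trip time grows.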
The array is continually updated as losses are detected. A weighted average is computed based on the sum of the loss interval values, and the inverse of the sum is the reported loss event rate, p. When implementing Bullet, we used an unreliable version of TFRC. We wanted a transport protocol that was congestion aware and TCP friendly. Lost packets were more easily recovered from other sources rather than waiting for a retransmission from the initial sender. Hence, we eliminate retransmissions from TFRC. Further, TFRC does not aggressively seek newly available bandwidth like TCP, a desirable trait in an overlay tree where there might be multiple competing flows sharing the same links. For example, if a leaf node in the tree tried to aggressively seek out new bandwidth, it could create congestion all the way up to the root of the tree. By using TFRC we were able to avoid these scenarios.
3. BULLET
Bullet is an efficient data distribution system for bandwidth intensive applications. While many current overlay network distribution algorithms use a distribution tree to deliver data from the tree's root to all other nodes, Bullet layers a mesh on top of an original overlay tree to increase overall bandwidth to all nodes in the tree. Hence, each node receives a parent stream from its parent in the tree and some number of perpendicular streams from chosen peers in the overlay. This has significant bandwidth impact when a single node in the overlay is unable to deliver adequate bandwidth to a receiving node. Bullet requires an underlying overlay tree for RanSub to deliver random subsets of participants' state to nodes in the overlay, informing them of a set of nodes that may be good candidates for retrieving data not available from any of the node's current peers and parent. While we also use the underlying tree for baseline streaming, this is not critical to Bullet's ability to efficiently deliver data to nodes in the overlay. As a result, Bullet is capable of functioning on top of essentially any overlay tree. In our experiments, we have run Bullet over random and bandwidth-optimized trees created offline (with global topological knowledge). Bullet registers itself with the underlying overlay tree so that it is informed when the overlay changes as nodes come and go or make performance transformations in the overlay. As with streaming overlay trees, Bullet can use standard transports such as TCP and UDP as well as our implementation of TFRC. For the remainder of this paper, we assume the use of TFRC since we primarily target streaming high-bandwidth content and we do not require reliable or in-order delivery. For simplicity, we assume that packets originate at the root of the tree and are tagged with increasing sequence numbers. Each node receiving a packet will optionally forward it to each of its children, depending on a number of factors relating to the child's bandwidth and its relative position in the tree.
3.1 Finding Overlay Peers
RanSub periodically delivers subsets of uniformly random selected nodes to each participant in the overlay. Bullet receivers use these lists to locate remote peers able to transmit missing data items with good bandwidth. RanSub messages contain a set of summary tickets that include a small (120 bytes) summary of the data that each node contains. RanSub delivers subsets of these summary tickets to nodes every configurable epoch (5 seconds by default). Each node in the tree maintains a
working set of the packets it has received thus far, indexed by sequence numbers. Nodes associate each working set with a Bloom filter that maintains a summary of the packets received thus far. Since the Bloom filter does not exceed a specific size (m) and we would like to limit the rate of false positives, Bullet periodically cleans up the Bloom filter by removing lower sequence numbers from it. This allows us to keep the Bloom filter population n from growing at an unbounded rate. The net effect is that a node will attempt to recover packets for a finite amount of time depending on the packet arrival rate. Similarly, Bullet removes older items that are not needed for data reconstruction from its working set and summary ticket. We use the collect and distribute phases of RanSub to carry Bullet summary tickets up and down the tree. In our current implementation, we use a set size of 10 summary tickets, allowing each collect and distribute to fit well within the size of a non-fragmented IP packet. Though Bullet supports larger set sizes, we expect this parameter to be tunable to specific applications' needs. In practice, our default size of 10 yields favorable results for a variety of overlays and network topologies. In essence, during an epoch a node receives a summarized partial view of the system's state at that time. Upon receiving a random subset each epoch, a Bullet node may choose to peer with the node having the lowest similarity ratio when compared to its own summary ticket. This is done only when the node has sufficient space in its sender list to accept another sender (senders with lackluster performance are removed from the current sender list as described in Section 3.4). Once a node has chosen the best node it sends it a peering request containing the requesting node's Bloom filter. Such a request is accepted by the potential sender if it has sufficient space in its receiver list for the incoming receiver. Otherwise, the send request is rejected (space is periodically created in the receiver lists as further described in Section 3.4).
3.2 Recovering Data From Peers
Assuming it has space for the new peer, a recipient of the peering request installs the received Bloom filter and will periodically transmit keys not present in the Bloom filter to the requesting node. The requesting node will refresh its installed Bloom filters at each of its sending peers periodically. Along with the fresh filter, a receiving node will also assign a portion of the sequence space to each of its senders. In this way, a node is able to reduce the likelihood that two peers simultaneously transmit the same key to it, wasting network resources. A node divides the sequence space in its current working set among each of its senders uniformly. As illustrated in Figure 4, a Bullet receiver views the data space as a matrix of packet sequences containing s rows, where s is its current number of sending peers. A receiver periodically (every 5 seconds by default) updates each sender with its current Bloom filter and the range of sequences covered in its Bloom filter. This identifies the range of packets that the receiver is currently interested in recovering. Over time, this range shifts as depicted in Figure 4-b).
Figure 4: A Bullet receiver views data as a matrix of sequenced packets with rows equal to the number of peer senders it currently has. It requests data within the range (Low, High) of sequence numbers based on what it has received. a) The receiver requests a specific row in the sequence matrix from each sender. b) As it receives more data, the range of sequences advances and the receiver requests different rows from senders.
In addition, the receiving node assigns to each sender a row from the matrix, labeled mod. A sender will forward to the receiver those packets that have a sequence number x such that x modulo s equals the mod number.
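A one-function Python sketch of this row assignment (the data layout is illustrative, not Bullet's implementation): with s senders, the sender assigned row 'mod' is asked only for sequence numbers in the requested range whose value modulo s equals mod, so peers supply disjoint portions of the stream.

    def keys_for_sender(low, high, num_senders, mod):
        # Sequence numbers in [low, high) that belong to this sender's row.
        return [x for x in range(low, high) if x % num_senders == mod]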
In this fashion, receivers register to receive disjoint data from their sending peers. By specifying ranges and matrix rows, a receiver is unlikely to receive duplicate data items, which would result in wasted bandwidth. A duplicate packet, however, may be received when a parent recovers a packet from one of its peers and relays the packet to its children (and descendants). In this case, a descendant would receive the packet out of order and may have already recovered it from one of its peers. In practice, this wasteful reception of duplicate packets is tolerable; less than 10% of all received packets are duplicates in our experiments.
3.3 Making Data Disjoint
We now provide details of Bullet's mechanisms to increase the ease by which nodes can find disjoint data not provided by parents. We operate on the premise that the main challenge in recovering lost data packets transmitted over an overlay distribution tree lies in finding the peer node housing the data to recover. Many systems take a hierarchical approach to this problem, propagating repair requests up the distribution tree until the request can be satisfied. This ultimately leads to scalability issues at higher levels in the hierarchy, particularly when overlay links are bandwidth-constrained. On the other hand, Bullet attempts to recover lost data from any non-descendant node, not just ancestors, thereby increasing overall system scalability. In traditional overlay distribution trees, packets are lost by the transmission transport and/or the network. Nodes attempt to stream data as fast as possible to each child and have essentially no control over which portions of the data stream are dropped by the transport or network. As a result, the streaming subsystem has no control over how many nodes in the system will ultimately receive a particular portion of the data. If few nodes receive a particular range of packets, recovering these pieces of data becomes more difficult, requiring increased communication costs, and leading to scalability problems. In contrast, Bullet nodes are aware of the bandwidth achievable to each of its children using the underlying transport. If
more children in hopes that the expected number of nodes receiving each packet is approximately the same.\nA node p maintains for each child, i, a limiting and sending factor, lfi and sfi.\nThese factors determine the proportion of p``s received data rate that it will forward to each child.\nThe sending factor sfi is the portion of the parent stream (rate) that each child should own based on the number of descendants the child has.\nThe more descendants a child has, the larger the portion of received data it should own.\nThe limiting factor lfi represents the proportion of the parent rate beyond the sending factor that each child can handle.\nFor example, a child with one descendant, but high bandwidth would have a low sending factor, but a very high limiting factor.\nThough the child is responsible for owning a small portion of the received data, it actually can receive a large portion of it.\nBecause RanSub collects descendant counts di for each child i, Bullet simply makes a call into RanSub when sending data to determine the current sending factors of its children.\nFor each child i out of k total, we set the sending factor to be: sfi = di\u00c8k j=1 dj .\nIn addition, a node tracks the data successfully transmitted via the transport.\nThat is, Bullet data transport sockets are non-blocking; successful transmissions are send attempts that are accepted by the non-blocking transport.\nIf the transport would block on a send (i.e., transmission of the packet would exceed the TCP-friendly fair share of network resources), the send fails and is counted as an unsuccessful send attempt.\nWhen a data packet is received by a parent, it calculates the proportion of the total data stream that has been sent to each child, thus far, in this epoch.\nIt then assigns ownership of the current packet to the child with sending proportion farthest away from its sfi as illustrated in Figure 5.\nHaving chosen the target of a particular packet, the parent attempts to forward the packet to the child.\nIf the send is not successful, the node must find an alternate child to own the packet.\nThis occurs when a child``s bandwidth is not adequate to fulfill its responsibilities based on its descendants (sfi).\nTo compensate, the node attempts to deterministically find a child that can own the packet (as evidenced by its transport accepting the packet).\nThe net result is that children with more than adequate bandwidth will own more of their share of packets than those with inadequate bandwidth.\nIn the event that no child can accept a packet, it must be dropped, corresponding to the case where the sum of all children bandwidths is inadequate to serve the received foreach child in children { if ( (child->sent / total_sent) < child->sending_factor) target_child = child; } if (!\nsenddata( target_child->addr, msg, size, key)) { // send succeeded target_child->sent++; target_child->child_filter.\ninsert(got_key); sent_packet = 1; } foreach child in children { should_send = 0; if (!\nsent_packet) // transfer ownership should_send = 1; else // test for available bandwidth if ( key % (1.0/child->limiting_factor) == 0 ) should_send = 1; if (should_send) { if (!\nsenddata( child->addr, msg, size, key)) { if (!\nsent_packet) // i received ownership child->sent++; else increase(child->limiting_factor); child->child_filter.\ninsert(got_key); sent_packet = 1; } else // send failed if (sent_packet) // was for extra bw decrease(child->limiting_factor); } } Figure 5: Pseudo code for Bullet``s disjoint data send routine 
While making data more difficult to recover, Bullet still allows for recovery of such data to its children. The sending node will cache the data packet and serve it to its requesting peers. This process allows its children to potentially recover the packet from one of their own peers, to whom additional bandwidth may be available. Once a packet has been successfully sent to the owning child, the node attempts to send the packet to all other children depending on the limiting factors lf_i. For each child i, a node attempts to forward the packet deterministically if the packet's sequence modulo 1/lf_i is zero. Essentially, this identifies which lf_i fraction of packets of the received data stream should be forwarded to each child to make use of the available bandwidth to each. If the packet transmission is successful, lf_i is increased such that one more packet is to be sent per epoch. If the transmission fails, lf_i is decreased by the same amount. This allows children limiting factors to be continuously adjusted in response to changing network conditions. It is important to realize that by maintaining limiting factors, we are essentially using feedback from children (by observing transport behavior) to determine the best data to stop sending during times when a child cannot handle the entire parent stream. In one extreme, if the sum of children bandwidths is not enough to receive the entire parent stream, each child will receive a completely disjoint data stream of packets it owns. In the other extreme, if each child has ample bandwidth, it will receive the entire parent stream as each lf_i would settle on 1.0. In the general case, our owning strategy attempts to make data disjoint among children subtrees with the guiding premise that, as much as possible, the expected number of nodes receiving a packet is the same across all packets.
3.4 Improving the Bullet Mesh
Bullet allows a maximum number of peering relationships. That is, a node can have up to a certain number of receivers and a certain number of senders (each defaults to 10 in our implementation). A number of considerations can make the current peering relationships sub-optimal at any given time: i) the probabilistic nature of RanSub means that a node may not have been exposed to a sufficiently appropriate peer, ii) receivers greedily choose peers, and iii) network conditions are constantly changing. For example, a sender node may wind up being unable to provide a node with very much useful (non-duplicate) data. In such a case, it would be advantageous to remove that sender as a peer and find some other peer that offers better utility. Each node periodically (every few RanSub epochs) evaluates the bandwidth performance it is receiving from its sending peers. A node will drop a peer if it is sending too many duplicate packets when compared to the total number of packets received. This threshold is set to 50% by default. If no such wasteful sender is found, a node will drop the sender that is delivering the least amount of useful data to it. It will replace this sender with some other sending peer candidate, essentially reserving a trial slot in its sender list. In this way, we are assured of keeping the best senders seen so far and will eliminate senders whose performance deteriorates with changing network conditions. Likewise, a Bullet sender will periodically evaluate its receivers. Each receiver updates senders of the total received bandwidth. The sender, knowing the amount of data it has sent to
each receiver, can determine which receiver is benefiting the least by peering with this sender.\nThis corresponds to the one receiver acquiring the least portion of its bandwidth through this sender.\nThe sender drops this receiver, creating an empty slot for some other trial receiver.\nThis is similar to the concept of weans presented in [24].\n4.\nEVALUATION We have evaluated Bullet``s performance in real Internet environments as well as the ModelNet [37] IP emulation framework.\nWhile the bulk of our experiments use ModelNet, we also report on our experience with Bullet on the PlanetLab Internet testbed [31].\nIn addition, we have implemented a number of underlying overlay network trees upon which Bullet can execute.\nBecause Bullet performs well over a randomly created overlay tree, we present results with Bullet running over such a tree compared against an o\ufb04ine greedy bottleneck bandwidth tree algorithm using global topological information described in Section 4.1.\nAll of our implementations leverage a common development infrastructure called MACEDON [33] that allows for the specification of overlay algorithms in a simple domainspecific language.\nIt enables the reuse of the majority of common functionality in these distributed systems, including probing infrastructures, thread management, message passing, and debugging environment.\nAs a result, we believe that our comparisons qualitatively show algorithmic differences rather than implementation intricacies.\nOur implementation of the core Bullet logic is under 1000 lines of code in this infrastructure.\nOur ModelNet experiments make use of 50 2Ghz Pentium4``s running Linux 2.4.20 and interconnected with 100 Mbps and 1 Gbps Ethernet switches.\nFor the majority of these experiments, we multiplex one thousand instances (overlay participants) of our overlay applications across the 50 Linux nodes (20 per machine).\nIn ModelNet, packet transmissions are routed through emulators responsible for accurately emulating the hop-by-hop delay, bandwidth, and congestion of a network topology.\nIn our evaluations, we used four 1.4Ghz Pentium III``s running FreeBSD-4.7 as emulators.\nThis platform supports approximately 2-3 Gbps of aggregate simultaneous communication among end hosts.\nFor most of our ModelNet experiments, we use 20,000-node INET-generated topologies [10].\nWe randomly assign our participant nodes to act as clients connected to one-degree stub nodes in the topology.\nWe randomly select one of these participants to act as the source of the data stream.\nPropagation delays in the network topology are calculated based on the relative placement of the network nodes in the plane by INET.\nBased on the classification in [8], we classify network links as being Client-Stub, Stub-Stub, TransitStub, and Transit-Transit depending on their location in the network.\nWe restrict topological bandwidth by setting the bandwidth for each link depending on its type.\nEach type of link has an associated bandwidth range from which the bandwidth is chosen uniformly at random.\nBy changing these ranges, we vary bandwidth constraints in our topologies.\nFor our experiments, we created three different ranges corresponding to low, medium, and high bandwidths relative to our typical streaming rates of 600-1000 Kbps as specified in Table 1.\nWhile the presented ModelNet results are restricted to two topologies with varying bandwidth constraints, the results of experiments with additional topologies all show qualitatively similar behavior.\nWe do not 
implement any particular coding scheme for our experiments. Rather, we assume that either each sequence number directly specifies a particular data block and the block offset for each packet, or we are distributing data within the same block for LT Codes, e.g., when distributing a file.
4.1 Offline Bottleneck Bandwidth Tree
One of our goals is to determine Bullet's performance relative to the best possible bandwidth-optimized tree for a given network topology. This allows us to quantify the possible improvements of an overlay mesh constructed using Bullet relative to the best possible tree. While we have not yet proven this, we believe that this problem is NP-hard. Thus, in this section we present a simple greedy offline algorithm to determine the connectivity of a tree likely to deliver a high level of bandwidth. In practice, we are not aware of any scalable online algorithms that are able to deliver the bandwidth of an offline algorithm. At the same time, trees constructed by our algorithm tend to be long and skinny, making them less resilient to failures and inappropriate for delay sensitive applications (such as multimedia streaming). In addition to any performance comparisons, a Bullet mesh has much lower depth than the bottleneck tree and is more resilient to failure, as discussed in Section 4.6.

Table 1: Bandwidth ranges for link types used in our topologies, expressed in Kbps.
Topology classification   Client-Stub   Stub-Stub   Transit-Stub   Transit-Transit
Low bandwidth             300-600       500-1000    1000-2000      2000-4000
Medium bandwidth          800-2800      1000-4000   1000-4000      5000-10000
High bandwidth            1600-5600     2000-8000   2000-8000      10000-20000

Specifically, we consider the following problem: given complete knowledge of the topology (individual link latencies, bandwidth, and packet loss rates), what is the overlay tree that will deliver the highest bandwidth to a set of predetermined overlay nodes? We assume that the throughput of the slowest overlay link (the bottleneck link) determines the throughput of the entire tree. We are, therefore, trying to find the directed overlay tree with the maximum bottleneck link. Accordingly, we refer to this problem as the overlay maximum bottleneck tree (OMBT). In a simplified case, assuming that congestion only exists on access links and there are no lossy links, there exists an optimal algorithm [23]. In the more general case of contention on any physical link, and when the system is allowed to choose the routing path between the two endpoints, this problem is known to be NP-hard [12], even in the absence of link losses. For the purposes of this paper, our goal is to determine a good overlay streaming tree that provides each overlay participant with substantial bandwidth, while avoiding overlay links with high end-to-end loss rates. We make the following assumptions:
1. The routing path between any two overlay participants is fixed. This closely models the existing overlay network model with IP for unicast routing.
2. The overlay tree will use TCP-friendly unicast connections to transfer data point-to-point.
3. In the absence of other flows, we can estimate the throughput of a TCP-friendly flow using a steady-state formula [27].
4. When several (n) flows share the same bottleneck link, each flow can achieve throughput of at most c/n, where c is the physical capacity of the link.
Given these assumptions, we concentrate on estimating the throughput available between two participants in the overlay. We start by
calculating the throughput using the steady-state formula. We then route the flow in the network, and consider the physical links one at a time. On each physical link, we compute the fair-share for each of the competing flows. The throughput of an overlay link is then approximated by the minimum of the fair-shares along the routing path, and the formula rate. If some flow does not require the same share of the bottleneck link as other competing flows (i.e., its throughput might be limited by losses elsewhere in the network), then the other flows might end up with a greater share than the one we compute. We do not account for this, as the major goal of this estimate is simply to avoid lossy and highly congested physical links. More formally, we define the problem as follows:
Overlay Maximum Bottleneck Tree (OMBT). Given a physical network represented as a graph $G = (V, E)$, a set of overlay participants $P \subset V$, a source node $s \in P$, bandwidth $B : E \rightarrow \mathbb{R}^+$, loss rate $L : E \rightarrow [0, 1]$, and propagation delay $D : E \rightarrow \mathbb{R}^+$ of each link, a set of possible overlay links $O = \{(v, w) \mid v, w \in P, v \neq w\}$, and a routing table $RT : O \times E \rightarrow \{0, 1\}$, find the overlay tree $T = \{o \mid o \in O\}$ ($|T| = |P| - 1$, and for all $v \in P$ there exists a path $o_v = s \rightsquigarrow v$) that maximizes
$\min_{o \in T} \left( \min\left( f(o), \; \min_{e \in o} \frac{b(e)}{|\{p \mid p \in T,\, e \in p\}|} \right) \right)$
where $f(o)$ is the TCP steady-state sending rate, computed from the round-trip time $d(o) = \sum_{e \in o} d(e) + \sum_{e \in \bar{o}} d(e)$ (given overlay link $o = (v, w)$ and its reverse $\bar{o} = (w, v)$) and the loss rate $l(o) = 1 - \prod_{e \in o} (1 - l(e))$. We write $e \in o$ to express that link $e$ is included in $o$'s routing path ($RT(o, e) = 1$).
Assuming that we can estimate the throughput of a flow, we proceed to formulate a greedy OMBT algorithm. This algorithm is non-optimal, but a similar approach was found to perform well [12]. Our algorithm is similar to the Widest Path Heuristic (WPH) [12], and more generally to Prim's MST algorithm [32]. During its execution, we maintain the set of nodes already in the tree, and the set of remaining nodes. To grow the tree, we consider all the overlay links leading from the nodes in the tree to the remaining nodes. We greedily pick the node with the highest throughput overlay link. Using this overlay link might cause us to route traffic over physical links traversed by some other tree flows. Since we do not re-examine the throughput of nodes that are already in the tree, they might end up being connected to the tree with slower overlay links than initially estimated. However, by attaching the node with the highest residual bandwidth at every step, we hope to lessen the effects of after-the-fact physical link sharing. With the synthetic topologies we use for our emulation environment, we have not found this inaccuracy to severely impact the quality of the tree.
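A compact Python sketch of this greedy, Prim-style construction; link_rate(u, v) stands in for the throughput estimate described above (the TCP response function capped by per-link fair shares) and is an assumed helper, not code from the paper.

    def greedy_ombt(participants, source, link_rate):
        # Repeatedly attach the remaining node reachable over the overlay
        # link with the highest estimated throughput.
        in_tree = {source}
        remaining = set(participants) - in_tree
        edges = []
        while remaining:
            u, v = max(((a, b) for a in in_tree for b in remaining),
                       key=lambda e: link_rate(*e))
            edges.append((u, v))
            in_tree.add(v)
            remaining.remove(v)
        return edges

As noted above, this greedy choice does not revisit earlier attachments when later flows share their physical links, so the result is a heuristic rather than an optimal bottleneck tree.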
4.2 Bullet vs. Streaming
We have implemented a simple streaming application that is capable of streaming data over any specified tree. In our implementation, we are able to stream data through overlay trees using UDP, TFRC, or TCP. Figure 6 shows average bandwidth that each of 1000 nodes receives via this streaming as time progresses on the x-axis. In this example, we use TFRC to stream 600 Kbps over our offline bottleneck bandwidth tree and a random tree (other random trees exhibit qualitatively similar behavior). In these experiments, streaming begins 100 seconds into each run. While the random tree delivers an achieved bandwidth of under 100 Kbps, our offline algorithm overlay delivers approximately 400 Kbps of data. For this experiment, bandwidths were set to the medium range from Table 1. We believe that any degree-constrained online bandwidth overlay tree algorithm would exhibit similar (or lower) behavior to our bandwidth-optimized overlay.
Figure 6: Achieved bandwidth over time for TFRC streaming over the bottleneck bandwidth tree and a random tree.
Hence, Bullet's goal is to overcome this bandwidth limit by allowing for the perpendicular reception of data and by utilizing disjoint data flows in an attempt to match or exceed the performance of our offline algorithm. To evaluate Bullet's ability to exceed the bandwidth achievable via tree distribution overlays, we compare Bullet running over a random overlay tree to the streaming behavior shown in Figure 6. Figure 7 shows the average bandwidth received by each node (labeled Useful total) with standard deviation. The graph also plots the total amount of data received and the amount of data a node receives from its parent. For this topology and bandwidth setting, Bullet was able to achieve an average bandwidth of 500 Kbps, five times that achieved by the random tree and more than 25% higher than the offline bottleneck bandwidth algorithm. Further, the total bandwidth (including redundant data) received by each node is only slightly higher than the useful content, meaning that Bullet is able to achieve high bandwidth while wasting little network resources. Bullet's use of TFRC in this example ensures that the overlay is TCP friendly throughout. The average per-node control overhead is approximately 30 Kbps. By tracing certain packets as they move through the system, we are able to acquire link stress estimates of our system. Though the link stress can be different for each packet since each can take a different path through the overlay mesh, we average link stress due to each traced packet. For this experiment, Bullet has an average link stress of approximately 1.5 with an absolute maximum link stress of 22. The standard deviation in most of our runs is fairly high because of the limited bandwidth randomly assigned to some Client-Stub and Stub-Stub links. We feel that this is consistent with real Internet behavior where clients have widely varying network connectivity. A time slice is shown in Figure 8 that plots the CDF of instantaneous bandwidths that each node receives. The graph shows that few client nodes receive inadequate bandwidth even though they are bandwidth constrained. The distribution rises sharply starting at approximately 500 Kbps. The vast majority of nodes receive a stream of 500-600 Kbps. We have evaluated Bullet under a number of bandwidth
Figure 7: Achieved bandwidth over time for Bullet over a random tree.\nFigure 8: CDF of instantaneous achieved bandwidth at time 430 seconds.\nWe have evaluated Bullet under a number of bandwidth constraints to determine how Bullet performs relative to the available bandwidth of the underlying topology.\nTable 1 describes representative bandwidth settings for our streaming rate of 600 Kbps.\nThe intent of these settings is to show a scenario where more than enough bandwidth is available to achieve the target rate even with traditional tree streaming, one where the available bandwidth is slightly insufficient, and one in which it is quite restricted.\nFigure 9 shows achieved bandwidths for Bullet and the bottleneck bandwidth tree over time, generated from topologies with bandwidths in each range.\nFigure 9: Achieved bandwidth for Bullet and bottleneck tree over time for high, medium, and low bandwidth topologies.\nIn all of our experiments, Bullet outperforms the bottleneck bandwidth tree by up to 100%, depending on how constrained the bandwidth of the underlying topology is.\nIn one extreme, with more than ample bandwidth, Bullet and the bottleneck bandwidth tree are both able to stream at the requested rate (600 Kbps in our example).\nIn the other extreme, heavily constrained topologies allow Bullet to achieve twice the bandwidth achievable via the bottleneck bandwidth tree.\nFor all other topologies, Bullet's benefits fall somewhere in between.\nIn our example, Bullet running over our medium-constrained bandwidth topology outperforms the bottleneck bandwidth tree by 25%.\nFurther, we stress that we believe it would be extremely difficult for any online tree-based algorithm to exceed the bandwidth achievable by our offline bottleneck algorithm, which makes use of global topological information.\nFor instance, we built a simple bandwidth-optimizing overlay tree construction based on Overcast [21].\nThe resulting dynamically constructed trees never achieved more than 75% of the bandwidth of our own offline algorithm.\n4.3 Creating Disjoint Data\nBullet's ability to deliver high bandwidth levels to nodes depends on its disjoint transmission strategy.\nThat is, when bandwidth to a child is limited, Bullet attempts to send the correct portions of data so that recovery of the lost data is facilitated.\nA Bullet parent sends different data to its children in the hope that each data item will be readily available to nodes spread throughout its subtree.\nIt does so by assigning ownership of data objects to children in a manner that makes the expected number of nodes holding a particular data object equal for all data objects it transmits.
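The following is a minimal sketch of one way such an ownership assignment could look. It is our own illustration under simplifying assumptions (each child reports its descendant count, and every object is owned by exactly one child), not Bullet's actual per-epoch sending computation: because every object's owner is drawn from the same distribution, each object ends up with the same expected number of downstream holders, while larger subtrees own proportionally more objects.

```python
import random

def assign_owners(object_ids, subtree_sizes):
    """subtree_sizes: dict mapping child id -> number of descendants
    (assumed to be reported up the tree).  Returns object id -> owning child."""
    children = list(subtree_sizes)
    weights = [subtree_sizes[c] for c in children]
    return {obj: random.choices(children, weights=weights, k=1)[0]
            for obj in object_ids}

# A parent with two children of unequal subtree size splits 20 objects among them.
owners = assign_owners(range(20), {"childA": 120, "childB": 40})
```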
Figure 10 shows the resulting bandwidth over time for the non-disjoint strategy, in which a node (and, more importantly, the root of the tree) attempts to send all data to each of its children (subject to independent losses at individual child links).\nFigure 10: Achieved bandwidth over time using non-disjoint data transmission.\nBecause the children's transports throttle the sending rate at each parent, some data is inherently sent disjointly (by chance).\nBy not explicitly choosing which data to send to each child, this approach deprives Bullet of 25% of its bandwidth capability compared to the case where our disjoint strategy is enabled (Figure 7).\n4.4 Epidemic Approaches\nIn this section, we explore how Bullet compares to data dissemination approaches that use some form of epidemic routing.\nWe implemented a form of gossiping, where a node forwards non-duplicate packets to a randomly chosen number of nodes in its local view.\nThis technique does not use a tree for dissemination, and is similar to lpbcast [14] (recently improved to incorporate retrieval of data objects [13]).\nWe do not disseminate packets every T seconds; instead, we forward them as soon as they arrive.\nWe also implemented a pbcast-like [2] approach for retrieving data missing from a data distribution tree.\nThe idea here is that nodes are expected to obtain most of their data from their parent.\nNodes then attempt to retrieve any missing data items through gossiping with random peers.\nInstead of gossiping with a fixed number of rounds for each packet, we use anti-entropy with a FIFO Bloom filter to attempt to locate peers that hold any locally missing data items.\nTo make our evaluation conservative, we assume that nodes employing gossip and anti-entropy recovery are able to maintain full group membership.\nWhile this might be difficult in practice, we assume that RanSub [24] could also be applied to these ideas, specifically in the case of anti-entropy recovery that employs an underlying tree.\nFurther, we also allow both techniques to reuse other aspects of our implementation: Bloom filters, TFRC transport, etc.\nTo reduce the number of duplicate packets, we use fewer peers in each round (5) than Bullet does (10).\nFor our configuration, we experimentally found that 5 peers results in the best performance with the lowest overhead.\nIn our experiments, increasing the number of peers did not improve the average bandwidth achieved throughout the system.\nTo allow TFRC enough time to ramp up to the appropriate TCP-friendly sending rate, we set the epoch length for anti-entropy recovery to 20 seconds.\nFor these experiments, we use a 5000-node INET topology with no explicit physical link losses.\nWe set link bandwidths according to the medium range from Table 1, and randomly assign 100 overlay participants.\nThe randomly chosen root either streams at 900 Kbps (over a random tree for Bullet and a greedy bottleneck tree for anti-entropy recovery), or sends packets at that rate to randomly chosen nodes for gossiping.\nFigure 11 shows the resulting bandwidth over time achieved by Bullet and the two epidemic approaches.\nAs expected, Bullet comes close to providing the target bandwidth to all participants, achieving approximately 60 percent more than gossiping and streaming with anti-entropy.\nThe two epidemic techniques send an excessive number of duplicates, effectively reducing the useful bandwidth provided to each node.\nMore importantly, both approaches assign equal significance to other peers, regardless of the available bandwidth and the similarity ratio.
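To make the anti-entropy recovery concrete, here is a minimal sketch of one epoch under simplifying assumptions of our own (a plain set stands in for the FIFO Bloom filter, message exchange is modeled as direct method calls, and the class and method names are illustrative rather than taken from the actual implementation): each node ships a digest of the packets it holds to one random peer and pulls back whatever that peer has that the digest does not cover.

```python
import random

class Node:
    def __init__(self, name):
        self.name = name
        self.have = {}                    # packet id -> payload

    def digest(self):
        # Stand-in for the FIFO Bloom filter: the set of held packet ids.
        return set(self.have)

    def packets_missing_from(self, peer_digest):
        # Packets we hold that the peer's digest does not cover.
        return {pid: data for pid, data in self.have.items()
                if pid not in peer_digest}

def anti_entropy_epoch(nodes):
    """One recovery epoch: every node picks a random peer, sends its digest,
    and receives the packets it is missing."""
    for node in nodes:
        peer = random.choice([n for n in nodes if n is not node])
        node.have.update(peer.packets_missing_from(node.digest()))

# Usage: node b recovers both packets held only by node a after one epoch.
a, b = Node("a"), Node("b")
a.have = {1: "block1", 2: "block2"}
anti_entropy_epoch([a, b])
```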
Figure 11: Achieved bandwidth over time for Bullet and epidemic approaches.\nBullet, on the other hand, establishes long-term connections with peers that provide good bandwidth and disjoint content, and avoids most of the duplicates by requesting disjoint data from each node's peers.\n4.5 Bullet on a Lossy Network\nTo evaluate Bullet's performance under more lossy network conditions, we have modified the 20,000-node topologies used in our previous experiments to include random packet losses.\nModelNet allows the specification of a packet loss rate in the description of a network link.\nOur goal in modifying these loss rates is to simulate queuing behavior when the network is under load due to background network traffic.\nTo effect this behavior, we first modify all non-transit links in each topology to have a packet loss rate chosen uniformly at random from [0, 0.003], resulting in a maximum loss rate of 0.3%.\nTransit links are likewise modified, but with a maximum loss rate of 0.1%.\nSimilar to the approach in [28], we randomly designate 5% of the links in the topologies as overloaded and set their loss rates uniformly at random from [0.05, 0.1], resulting in a maximum packet loss rate of 10%.\nFigure 12 shows achieved bandwidths for streaming over Bullet and using our greedy offline bottleneck bandwidth tree.\nFigure 12: Achieved bandwidths for Bullet and bottleneck bandwidth tree over a lossy network topology.\nBecause losses adversely affect the bandwidth achievable over TCP-friendly transport, and since bandwidths are strictly monotonically decreasing over a streaming tree, tree-based algorithms perform considerably worse than Bullet when used on a lossy network.\nIn all cases, Bullet delivers at least twice as much bandwidth as the bottleneck bandwidth tree.\nAdditionally, losses in the low bandwidth topology essentially keep the bottleneck bandwidth tree from delivering any data, an artifact that is avoided by Bullet.
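The loss-rate assignment described above is straightforward to reproduce; the sketch below applies the same scheme to a generic list of links (the dictionary-based link representation is our own assumption for illustration, not the ModelNet configuration format).

```python
import random

def assign_loss_rates(links, overloaded_fraction=0.05):
    """links: list of dicts, each with a boolean 'transit' flag.
    Adds a 'loss' field to every link following the scheme in the text."""
    overloaded = set(random.sample(range(len(links)),
                                   int(overloaded_fraction * len(links))))
    for i, link in enumerate(links):
        if i in overloaded:
            link["loss"] = random.uniform(0.05, 0.1)   # overloaded links: 5-10%
        elif link["transit"]:
            link["loss"] = random.uniform(0.0, 0.001)  # transit links: up to 0.1%
        else:
            link["loss"] = random.uniform(0.0, 0.003)  # edge links: up to 0.3%
    return links

# Usage: tag a toy topology in which every tenth link is a transit link.
topology_links = assign_loss_rates([{"transit": (i % 10 == 0)} for i in range(100)])
```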
4.6 Performance Under Failure\nIn this section, we discuss Bullet's behavior in the face of node failure.\nIn contrast to streaming distribution trees, which must quickly detect failures and make tree transformations to overcome them, Bullet's failure resilience rests on its ability to maintain a high level of achieved bandwidth by virtue of perpendicular (peer) streaming.\nWhile all nodes under a failed node in a distribution tree will experience a temporary disruption in service, Bullet nodes are able to compensate for this by receiving data from peers throughout the outage.\nBecause Bullet and, more importantly, RanSub make use of an underlying tree overlay, part of Bullet's failure recovery properties will depend on the failure recovery behavior of the underlying tree.\nFor the purposes of this discussion, we simply assume the worst-case scenario, in which the underlying tree has no failure recovery.\nIn our failure experiments, we fail one of the root's children (with 110 of the total 1000 nodes as descendants) 250 seconds after data streaming is started.\nBy failing one of the root's children, we are able to show Bullet's worst-case performance under a single node failure.\nIn our first scenario, we disable failure detection in RanSub, so that after a failure occurs Bullet nodes request data only from their current peers.\nThat is, at this point RanSub stops functioning, and no new peer relationships are created for the remainder of the run.\nFigure 13 shows Bullet's achieved bandwidth over time for this case.\nFigure 13: Bandwidth over time with a worst-case node failure and no RanSub recovery.\nWhile the average achieved rate drops from 500 Kbps to 350 Kbps, most nodes (including the descendants of the failed root child) are able to recover a large portion of the data rate.\nNext, we enable RanSub failure detection, which recognizes a node's failure when a RanSub epoch has lasted longer than a predetermined maximum (5 seconds for this test).\nIn this case, the root simply initiates the next distribute phase upon RanSub timeout.\nThe net result is that nodes that are not descendants of the failed node continue to receive updated random subsets, allowing them to peer with appropriate nodes that reflect the new network conditions.\nAs shown in Figure 14, the failure causes a negligible disruption in performance.\nFigure 14: Bandwidth over time with a worst-case node failure and RanSub recovery enabled.\nWith RanSub failure detection enabled, nodes quickly learn of other nodes from which to receive data.\nOnce such recovery completes, the descendants of the failed node use their already established peer relationships to compensate for their ancestor's failure.\nHence, because Bullet is an overlay mesh, its reliability characteristics far exceed those of typical overlay distribution trees.
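The timeout-based failure detection just described amounts to a simple rule at the root; the sketch below captures it under our own simplifying assumptions (illustrative class and method names, and a placeholder for actually sending the distribute message; this is not the real RanSub code).

```python
import time

MAX_EPOCH_SECONDS = 5.0   # predetermined maximum epoch length used in this test

class RanSubRoot:
    def __init__(self):
        self.epoch_started = time.monotonic()

    def start_distribute_phase(self):
        # Placeholder: send the next distribute message down the control tree.
        self.epoch_started = time.monotonic()

    def on_collect_complete(self):
        # Normal operation: the collect phase finished, so begin the next epoch.
        self.start_distribute_phase()

    def check_timeout(self):
        # Failure handling: if a silent (failed) node stalls the epoch past the
        # maximum, move on so live nodes keep receiving fresh random subsets.
        if time.monotonic() - self.epoch_started > MAX_EPOCH_SECONDS:
            self.start_distribute_phase()

root = RanSubRoot()
root.check_timeout()   # no-op until the 5-second maximum has elapsed
```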
4.7 PlanetLab\nThis section contains results from the deployment of Bullet over the PlanetLab [31] wide-area network testbed.\nFor our first experiment, we chose 47 nodes for our deployment, with no two machines deployed at the same site.\nSince there is currently ample bandwidth available throughout the PlanetLab overlay (a characteristic not necessarily representative of the Internet at large), we designed this experiment to show that Bullet can achieve higher bandwidth than an overlay tree when the source is constrained, for instance in cases of congestion on its outbound access link or of overload by a flash crowd.\nWe did this by choosing a root in Europe connected to PlanetLab with fairly low bandwidth.\nThe node we selected was in Italy (cs.unibo.it), and we had 10 other overlay nodes in Europe.\nWithout global knowledge of the topology in PlanetLab (and the Internet), we are, of course, unable to produce our greedy bottleneck bandwidth tree for comparison.\nWe ran Bullet over a random overlay tree for 300 seconds while attempting to stream at a rate of 1.5 Mbps.\nWe waited 50 seconds before starting to stream data to allow nodes to successfully join the tree.\nWe compare the performance of Bullet to data streaming over multiple handcrafted trees.\nFigure 15 shows our results for two such trees.\nFigure 15: Achieved bandwidth over time for Bullet and TFRC streaming over different trees on PlanetLab with a root in Europe.\nThe good tree has all nodes in Europe located high in the tree, close to the root.\nWe used pathload [20] to measure the available bandwidth between the root and all other nodes.\nNodes with high bandwidth measurements were placed close to the root.\nIn this case, we are able to achieve a bandwidth of approximately 300 Kbps.\nThe worst tree was created by setting the root's children to be the three nodes with the worst bandwidth characteristics from the root, as measured by pathload.\nAll subsequent levels in the tree were set in this fashion.\nFor comparison, we replaced all nodes in Europe in our topology with nodes in the US, creating a topology that included only US nodes with high bandwidth characteristics.\nAs expected, Bullet was able to achieve the full 1.5 Mbps rate in this case.\nA well-constructed tree over this high-bandwidth topology yielded slightly less than 1.5 Mbps, verifying that our approach does not sacrifice performance under high bandwidth conditions and improves performance under constrained bandwidth scenarios.\n5.\nRELATED WORK\nSnoeren et al. [36] use an overlay mesh to achieve reliable and timely delivery of mission-critical data.\nIn this system, every node chooses n parents from which to receive duplicate packet streams.\nSince its foremost emphasis is reliability, the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level.\nFurther, during recovery from parent failure, it limits an overlay router's choice of parents to nodes with a level number less than its own.\nThe power of perpendicular downloads is perhaps best illustrated by Kazaa [22], the popular peer-to-peer file swapping network.\nKazaa nodes are organized into a scalable, hierarchical structure.\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it.\nSince Kazaa does not address the multicast communication model, a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure.\nKazaa does not use erasure coding; therefore, it may take considerable time to locate the last few bytes.\nBitTorrent [3] is another example of a file distribution system currently deployed on the Internet.\nIt utilizes trackers that direct downloaders to random subsets of machines that already have portions of the file.\nThe tracker poses a scalability limit, as it continuously updates the system-wide distribution of the file.\nLowering the tracker communication rate could hurt overall system performance, as information might be out of date.\nFurther, BitTorrent does not employ any strategy to disseminate data to different regions of the network, potentially making it more difficult to recover data depending on client access patterns.\nSimilar to Bullet, BitTorrent incorporates the notion of choking at each node, with the goal of identifying the receivers that benefit the most from downloading from that particular source.\nFastReplica [11] addresses the problem of reliable and efficient file distribution in content distribution networks (CDNs).\nIn the basic algorithm, nodes are organized into groups of fixed size (n), with full group membership information at each node.\nTo distribute the file, a node splits it into n equal-sized portions, sends the portions to other group members, and instructs them to download the missing pieces in parallel from other group members.\nSince only a fixed portion of the file is transmitted
along each of the overlay links, the impact of congestion is smaller than in the case of tree distribution.\nHowever, since it treats all paths equally, FastReplica does not take full advantage of high-bandwidth overlay links in the system.\nSince it requires file store-and-forward logic at each level of the hierarchy, necessary for scaling the system, it may not be applicable to high-bandwidth streaming.\nThere are numerous protocols that aim to add reliability to IP multicast.\nIn Scalable Reliable Multicast (SRM) [16], nodes multicast retransmission requests for missed packets.\nTwo techniques attempt to improve the scalability of this approach: probabilistic choice of retransmission timeouts, and organization of receivers into hierarchical local recovery groups.\nHowever, it is difficult to find appropriate timer values and local scoping settings (via the TTL field) for a wide range of topologies and numbers of receivers, even when adaptive techniques are used.\nOne recent study [2] shows that SRM may have significant overhead due to retransmission requests.\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the non-reliable IP-multicast tree.\nIn pbcast [2], a node has global group membership, and periodically chooses a random subset of peers to which it sends a digest of its received packets.\nA node that receives the digest responds to the sender with the missing packets in a last-in, first-out fashion.\nlpbcast [14] addresses pbcast's scalability issues (associated with global knowledge) by constructing, in a decentralized fashion, a partial group membership view at each node.\nThe average size of the views is engineered to allow a message to reach all participants with high probability.\nSince lpbcast does not require an underlying tree for data distribution and relies on the push-gossiping model, its network overhead can be quite high.\nCompared to the reliable multicast efforts, Bullet behaves favorably in terms of network overhead because nodes do not blindly request retransmissions from their peers.\nInstead, Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content.\nFurther, a Bullet node splits the retransmission load between all of its peers.\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest.\nHowever, this does not guarantee that packets received in parallel from multiple peers will not be duplicates.\nMore importantly, the multicast recovery methods are limited by the bandwidth through the tree, while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree.\nNarada [19] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links.\nIt then runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source.\nNarada nodes maintain global knowledge about all group participants, limiting system scalability to several tens of nodes.\nFurther, the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent.\nOn the other hand, the fundamental goal of Bullet is to increase bandwidth through the download of disjoint data from multiple peers.\nOvercast [21] is an example of a bandwidth-efficient overlay tree construction algorithm.\nIn this system, all nodes
join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth.\nBullet is expected to be more resilient to node departures than any tree, including Overcast.\nInstead of waiting to get the data it missed from a new parent, a node can start getting data from its perpendicular peers.\nThis transition is seamless, as the node that is disconnected from its parent will start demanding more missing packets from its peers during the standard round of refreshing its filters.\nOvercast convergence time is limited by probes to immediate siblings and ancestors.\nBullet is able to provide approximately the target bandwidth without having a fully converged tree.\nIn parallel to our own work, SplitStream [9] also has the goal of achieving high bandwidth data dissemination.\nIt operates by splitting the multicast stream into k stripes, transmitting each stripe along a separate multicast tree built using Scribe [34].\nThe key design goal of the tree construction mechanism is to have each node be an intermediate node in at most one tree (while observing both inbound and outbound node bandwidth constraints), thereby reducing the impact of a single node's sudden departure on the rest of the system.\nThe join procedure can potentially sacrifice the interior-node-disjointness achieved by Scribe.\nPerhaps more importantly, SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree, including the links between the data source and the roots of the individual stripe trees independently chosen by Scribe.\nTo some extent, Bullet and SplitStream are complementary.\nFor instance, Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe.\nCoopNet [29] considers live content streaming in a peer-to-peer environment, subject to high node churn.\nConsequently, the system favors resilience over network efficiency.\nIt uses a centralized approach for constructing either random or deterministic node-disjoint (similar to SplitStream) trees, and it includes an MDC [17] adaptation framework based on scalable receiver feedback that attempts to maximize the signal-to-noise ratio perceived by receivers.\nIn the case of on-demand streaming, CoopNet [30] addresses the flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content.\nCompared to CoopNet, Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file.\n6.\nCONCLUSIONS\nTypically, high bandwidth overlay data streaming takes place over a distribution tree.\nIn this paper, we argue that, in fact, an overlay mesh is able to deliver fundamentally higher bandwidth.\nOf course, a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers.\nThis paper presents the design and implementation of Bullet, a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures.\nSpecifically, this paper makes the following contributions:\n• We present the design and analysis of Bullet, an overlay construction algorithm that creates a mesh over any distribution tree and allows overlay participants to achieve higher bandwidth throughput than traditional data streaming.\nAs a related benefit, we eliminate the
overhead required to probe for available bandwidth in traditional distributed tree construction techniques.\n• We provide a technique for recovering missing data from peers in a scalable and efficient manner.\nRanSub periodically disseminates summaries of data sets received by a changing, uniformly random subset of global participants.\n• We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes.\n• A large-scale evaluation of 1000 overlay participants running in an emulated 20,000-node network topology, as well as experimentation on top of the PlanetLab Internet testbed, shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth tree.\nAcknowledgments\nWe would like to thank David Becker for his invaluable help with our ModelNet experiments and Ken Yocum for his help with ModelNet emulation optimizations.\nIn addition, we thank our shepherd, Barbara Liskov, and our anonymous reviewers, who provided excellent feedback.\n7.\nREFERENCES\n[1] Suman Banerjee, Bobby Bhattacharjee, and Christopher Kommareddy.\nScalable Application Layer Multicast.\nIn Proceedings of ACM SIGCOMM, August 2002.\n[2] Kenneth Birman, Mark Hayden, Oznur Ozkasap, Zhen Xiao, Mihai Budiu, and Yaron Minsky.\nBimodal Multicast.\nACM Transactions on Computer Systems, 17(2), May 1999.\n[3] BitTorrent.\nhttp://bitconjurer.org/BitTorrent.\n[4] Burton Bloom.\nSpace/Time Trade-offs in Hash Coding with Allowable Errors.\nCommunications of the ACM, 13(7):422-426, July 1970.\n[5] Andrei Broder.\nOn the Resemblance and Containment of Documents.\nIn Proceedings of Compression and Complexity of Sequences (SEQUENCES '97), 1997.\n[6] John W. Byers, Jeffrey Considine, Michael Mitzenmacher, and Stanislav Rost.\nInformed Content Delivery Across Adaptive Overlay Networks.\nIn Proceedings of ACM SIGCOMM, August 2002.\n[7] John W. Byers, Michael Luby, Michael Mitzenmacher, and Ashutosh Rege.\nA Digital Fountain Approach to Reliable Distribution of Bulk Data.\nIn Proceedings of ACM SIGCOMM, pages 56-67, 1998.\n[8] Ken Calvert, Matt Doar, and Ellen W.
Zegura.\nModeling Internet Topology.\nIEEE Communications Magazine, June 1997.\n[9] Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, Animesh Nandi, Antony Rowstron, and Atul Singh.\nSplitStream: High-bandwidth Content Distribution in Cooperative Environments.\nIn Proceedings of the 19th ACM Symposium on Operating Systems Principles, October 2003.\n[10] Hyunseok Chang, Ramesh Govindan, Sugih Jamin, Scott Shenker, and Walter Willinger.\nTowards Capturing Representative AS-Level Internet Topologies.\nIn Proceedings of ACM SIGMETRICS, June 2002.\n[11] Ludmila Cherkasova and Jangwon Lee.\nFastReplica: Efficient Large File Distribution within Content Delivery Networks.\nIn Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems, March 2003.\n[12] Reuven Cohen and Gideon Kaempfer.\nA Unicast-based Approach for Streaming Multicast.\nIn Proceedings of IEEE INFOCOM, pages 440-448, 2001.\n[13] Patrick Eugster, Sidath Handurukande, Rachid Guerraoui, Anne-Marie Kermarrec, and Petr Kouznetsov.\nLightweight Probabilistic Broadcast.\nTo appear in ACM Transactions on Computer Systems.\n[14] Patrick Eugster, Sidath Handurukande, Rachid Guerraoui, Anne-Marie Kermarrec, and Petr Kouznetsov.\nLightweight Probabilistic Broadcast.\nIn Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2001.\n[15] Sally Floyd, Mark Handley, Jitendra Padhye, and Jorg Widmer.\nEquation-Based Congestion Control for Unicast Applications.\nIn Proceedings of ACM SIGCOMM, pages 43-56, Stockholm, Sweden, August 2000.\n[16] Sally Floyd, Van Jacobson, Ching-Gung Liu, Steven McCanne, and Lixia Zhang.\nA Reliable Multicast Framework for Light-weight Sessions and Application Level Framing.\nIEEE/ACM Transactions on Networking, 5(6):784-803, 1997.\n[17] Vivek K. Goyal.\nMultiple Description Coding: Compression Meets the Network.\nIEEE Signal Processing Magazine, pages 74-93, May 2001.\n[18] Yang-hua Chu, Sanjay Rao, and Hui Zhang.\nA Case for End System Multicast.\nIn Proceedings of the ACM SIGMETRICS 2000 International Conference on Measurement and Modeling of Computer Systems, June 2000.\n[19] Yang-hua Chu, Sanjay G. Rao, Srinivasan Seshan, and Hui Zhang.\nEnabling Conferencing Applications on the Internet Using an Overlay Multicast Architecture.\nIn Proceedings of ACM SIGCOMM, August 2001.\n[20] Manish Jain and Constantinos Dovrolis.\nEnd-to-End Available Bandwidth: Measurement Methodology, Dynamics, and Relation with TCP Throughput.\nIn Proceedings of ACM SIGCOMM, New York, August 19-23, 2002.\n[21] John Jannotti, David K. Gifford, Kirk L. Johnson, M. Frans Kaashoek, and James W. O'Toole Jr.\nOvercast: Reliable Multicasting with an Overlay Network.\nIn Proceedings of Operating Systems Design and Implementation (OSDI), October 2000.\n[22] Kazaa media desktop.\nhttp://www.kazaa.com.\n[23] Min Sik Kim, Simon S. Lam, and Dong-Young Lee.\nOptimal Distribution Tree for Internet Streaming Media.\nTechnical Report TR-02-48, Department of Computer Sciences, University of Texas at Austin, September 2002.\n[24] Dejan Kostić, Adolfo Rodriguez, Jeannie Albrecht, Abhijeet Bhirud, and Amin Vahdat.\nUsing Random Subsets to Build Scalable Network Services.\nIn Proceedings of the USENIX Symposium on Internet Technologies and Systems, March 2003.\n[25] Michael Luby.\nLT Codes.\nIn The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002.\n[26] Michael G. Luby, Michael Mitzenmacher, M. Amin Shokrollahi, Daniel A.
Spielman, and Volker Stemann.\nPractical Loss-Resilient Codes.\nIn Proceedings of the 29th Annual ACM Symposium on the Theory of Computing (STOC '97), pages 150-159, New York, May 1997.\nAssociation for Computing Machinery.\n[27] Jitendra Padhye, Victor Firoiu, Don Towsley, and Jim Kurose.\nModeling TCP Throughput: A Simple Model and its Empirical Validation.\nIn Proceedings of the ACM SIGCOMM '98 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 303-314, Vancouver, Canada, 1998.\n[28] Venkata N. Padmanabhan, Lili Qiu, and Helen J. Wang.\nServer-based Inference of Internet Link Lossiness.\nIn Proceedings of IEEE INFOCOM, San Francisco, CA, USA, 2003.\n[29] Venkata N. Padmanabhan, Helen J. Wang, and Philip A. Chou.\nResilient Peer-to-Peer Streaming.\nIn Proceedings of the 11th IEEE International Conference on Network Protocols (ICNP), Atlanta, Georgia, USA, 2003.\n[30] Venkata N. Padmanabhan, Helen J. Wang, Philip A. Chou, and Kunwadee Sripanidkulchai.\nDistributing Streaming Media Content Using Cooperative Networking.\nIn Proceedings of ACM/IEEE NOSSDAV, 2002.\n[31] Larry Peterson, Tom Anderson, David Culler, and Timothy Roscoe.\nA Blueprint for Introducing Disruptive Technology into the Internet.\nIn Proceedings of ACM HotNets-I, October 2002.\n[32] R. C. Prim.\nShortest Connection Networks and Some Generalizations.\nBell System Technical Journal, pages 1389-1401, November 1957.\n[33] Adolfo Rodriguez, Sooraj Bhat, Charles Killian, Dejan Kostić, and Amin Vahdat.\nMACEDON: Methodology for Automatically Creating, Evaluating, and Designing Overlay Networks.\nTechnical Report CS-2003-09, Duke University, July 2003.\n[34] Antony Rowstron, Anne-Marie Kermarrec, Miguel Castro, and Peter Druschel.\nSCRIBE: The Design of a Large-scale Event Notification Infrastructure.\nIn Proceedings of the Third International Workshop on Networked Group Communication, November 2001.\n[35] Stefan Savage.\nSting: A TCP-based Network Measurement Tool.\nIn Proceedings of the 2nd USENIX Symposium on Internet Technologies and Systems (USITS '99), pages 71-80, Berkeley, CA, October 11-14, 1999.\nUSENIX Association.\n[36] Alex C. Snoeren, Kenneth Conley, and David K.
Gifford.\nMesh-Based Content Routing Using XML.\nIn Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP '01), October 2001.\n[37] Amin Vahdat, Ken Yocum, Kevin Walsh, Priya Mahadevan, Dejan Kostić, Jeff Chase, and David Becker.\nScalability and Accuracy in a Large-Scale Network Emulator.\nIn Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), December 2002.
", "lvl-4": "Bullet : High Bandwidth Data Dissemination Using an Overlay Mesh\nABSTRACT\nIn recent years , overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet .\nTypically , nodes self-organize with the goal of forming an efficient overlay tree , one that meets performance targets without placing undue burden on the underlying network .\nIn this paper , we target high-bandwidth data distribution from a single source to a large number of receivers .\nApplications include large-file transfers and real-time multimedia streaming .\nFor these applications , we argue that an overlay mesh , rather than a tree , can deliver fundamentally higher bandwidth and reliability relative to typical tree structures .\nThis paper presents Bullet , a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh .\nWe construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network .\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel .\nKey contributions of this work include : i ) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node , ii ) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items , and iii ) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances .\nIn addition , we find that , relative to tree-based solutions , Bullet reduces the need to perform expensive bandwidth probing .\nIn a tree , it is critical that a node 's parent delivers a high rate of application data to each child .\nIn Bullet however , nodes simultaneously receive data from multiple sources in parallel , making it less important to locate any single source capable of sustaining a high transmission rate .\n1 .\nINTRODUCTION\nIn this paper , we consider the following general problem .\nGiven a sender and a large set of interested receivers spread across the Internet , how can we maximize the amount of bandwidth delivered to receivers ?\nOur problem domain includes software or video distribution and real-time multimedia streaming .\nTraditionally , native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion .\nHowever , a number of considerations , including scale , reliability , and congestion control , have limited the wide-scale deployment of IP multicast .\nEven if all these problems were to be addressed , IP multicast does not consider bandwidth when constructing its distribution tree .\nMore recently , overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery .\nTypical overlay structures attempt to mimic the structure of multicast routing trees .\nIn network-layer multicast however , interior nodes consist of high speed routers with limited processing power and extensibility .\nOverlays , on the other hand , use programmable ( and hence extensible ) end hosts as interior nodes in the overlay tree , with these hosts acting as repeaters to multiple children down the tree .\nOverlays have shown tremendous promise for multicast-style applications .\nHowever
, we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability .\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree .\nAny loss high up the tree will reduce the bandwidth available to receivers lower down the tree .\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [ 2 , 6 ] .\nHowever , fundamentally , the bandwidth available to any host is limited by the bandwidth available from that node 's single parent in the tree .\nThus , our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined .\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss , we propose that participants in a multicast overlay cooperate to strategically\ntransmit disjoint data sets to various points in the network .\nHere , the sender splits data into sequential blocks .\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the network .\nNodes still receive a set of objects from their parents , but they are then responsible for locating peers that hold missing data objects .\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants .\nIn this way , we avoid the problem of locating the `` last object '' , which may only be available at a few nodes .\nTo illustrate Bullet 's behavior , consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available ( TCP-friendly ) bandwidth to each of A and B. 
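The splitting-and-reconciliation idea described above can be made concrete with a small sketch. The following Python fragment is illustrative only, not Bullet's implementation: the names split_disjoint and reconcile are hypothetical, the assignment policy is a simple weighted round-robin stand-in for Bullet's sending factors, and peers are modeled as plain sets of object identifiers.

# Minimal sketch (assumed names, not Bullet's code): a sender assigns each
# object of one block to exactly one child, proportionally to the bandwidth
# available toward that child, and two peers then exchange whatever the
# other is missing.
def split_disjoint(objects, child_bandwidth):
    """Assign every object to exactly one child (disjoint by construction),
    in proportion to the bandwidth available toward that child."""
    total = float(sum(child_bandwidth.values()))
    shares = {c: bw / total for c, bw in child_bandwidth.items()}
    assignment = {c: [] for c in child_bandwidth}
    for n, obj in enumerate(objects, start=1):
        # Give the object to the child currently furthest below its share.
        child = min(assignment, key=lambda c: len(assignment[c]) / n - shares[c])
        assignment[child].append(obj)
    return assignment

def reconcile(have_a, have_b):
    """Objects each peer would stream to the other to fill in missing data."""
    return sorted(have_b - have_a), sorted(have_a - have_b)

if __name__ == "__main__":
    block = list(range(20))                              # one block of 20 objects
    parts = split_disjoint(block, {"A": 1.0, "B": 1.0})  # two equal-bandwidth children
    a_from_b, b_from_a = reconcile(set(parts["A"]), set(parts["B"]))
    print(parts)
    print("A retrieves from B:", a_from_b)
    print("B retrieves from A:", b_from_a)

With that sketch in mind, return to the three-node overlay of R, A, and B above.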
However , there is also 1 Mbps of available bandwidth between A and B .\nIn this example , Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B .\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another , effectively achieving a retrieval rate of 2 Mbps .\nOn the other hand , any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data .\nAny solution for achieving the above model must maintain a number of properties .\nFirst , it must be TCP friendly [ 15 ] .\nSecond , it must impose low control overhead .\nThere are many possible sources of such overhead , including probing for available bandwidth between nodes , locating appropriate nodes to `` peer '' with for data retrieval and redundantly receiving the same data objects from multiple sources .\nThird , the algorithm should be decentralized and scalable to thousands of participants .\nNo node should be required to learn or maintain global knowledge , for instance global group membership or the set of data objects currently available at all nodes .\nFinally , the approach must be robust to individual failures .\nFor example , the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants ; no single failure should result in the complete loss of data for any significant fraction of nodes , as might be the case for a single node failure `` high up '' in a multicast overlay tree .\nIn this context , this paper presents the design and evaluation of Bullet , an algorithm for constructing an overlay mesh that attempts to maintain the above properties .\nBullet nodes begin by self-organizing into an overlay tree , which can be constructed by any of a number of existing techniques [ 1 , 18 , 21 , 24 , 34 ] .\nEach Bullet node , starting with the root of the underlying tree , then transmits a disjoint set of data to each of its children , with the goal of maintaining uniform representativeness of each data item across all participants .\nThe level of disjointness is determined by the bandwidth available to each of its children .\nBullet then employs a scalable and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node .\nThus , Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree .\nFinally , we use TFRC [ 15 ] to transfer data both down the overlay tree and among peers .\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree .\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol .\nIn these trees , it is critical that each participant locates a parent via probing with a high level of available bandwidth because it receives data from only a single source ( its parent ) .\nThus , even once the tree is constructed , nodes must continue their probing to adapt to dynamically changing network conditions .\nWhile bandwidth probing is an active area of research [ 20 , 35 ] , accurate results generally require the transfer of a large amount of data to gain confidence in the results .\nOur approach with Bullet allows receivers to obtain high bandwidth in aggregate using individual transfers from peers spread across the system .\nThus , in 
Bullet , the bandwidth available from any individual peer is much less important than in any bandwidthoptimized tree .\nFurther , all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh .\nWe have completed a prototype of Bullet running on top of a number of overlay trees .\nOur evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree ( using an offline algorithm and global network topology information ) , all while remaining TCP friendly .\nFor these live Internet runs , we find that Bullet can deliver comparable bandwidth performance improvements .\nIn both cases , the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node , acceptable for our target high-bandwidth , large-scale scenarios .\nThe remainder of this paper is organized as follows .\nSection 2 presents Bullet 's system components including RanSub , informed content delivery , and TFRC .\nSection 3 then details Bullet , an efficient data distribution system for bandwidth intensive applications .\nSection 4 evaluates Bullet 's performance for a variety of network topologies , and compares it to existing multicast techniques .\nSection 5 places our work in the context of related efforts and Section 6 presents our conclusions .\n5 .\nRELATED WORK\nSnoeren et al. [ 36 ] use an overlay mesh to achieve reliable and timely delivery of mission-critical data .\nIn this system , every node chooses n `` parents '' from which to receive duplicate packet streams .\nSince its foremost emphasis is reliability , the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level .\nFurther , during recovery from parent failure , it limits an overlay router 's choice of parents to nodes with a level number that is less than its own level number .\nKazaa nodes are organized into a scalable , hierarchical structure .\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it .\nSince Kazaa does not address the multicast communication model , a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure .\nBitTorrent [ 3 ] is another example of a file distribution system currently deployed on the Internet .\nThe tracker poses a scalability limit , as it continuously updates the systemwide distribution of the file .\nSimilar to Bullet , BitTorrent incorporates the notion of `` choking '' at each node with the goal of identifying receivers that benefit the most by downloading from that particular source .\nFastReplica [ 11 ] addresses the problem of reliable and efficient file distribution in content distribution networks ( CDNs ) .\nIn the basic algorithm , nodes are organized into groups of fixed size ( n ) , with full group membership information at each node .\nTo distribute the file , a node splits it into n equal-sized portions , sends the portions to other group members , and instructs them to download the missing pieces in parallel from other group members .\nSince only a fixed portion of the file is transmitted along each of the overlay links , the impact of congestion is smaller than in the case of tree distribution .\nHowever , since it 
treats all paths equally , FastReplica does not take full advantage of highbandwidth overlay links in the system .\nThere are numerous protocols that aim to add reliability to IP multicast .\nIn Scalable Reliable Multicast ( SRM ) [ 16 ] , nodes multicast retransmission requests for missed packets .\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the nonreliable IP-multicast tree .\nIn pbcast [ 2 ] , a node has global group membership , and periodically chooses a random subset of peers to send a digest of its received packets .\nA node that receives the digest responds to the sender with the missing packets in a last-in , first-out fashion .\nSince lbpcast does not require an underlying tree for data distribution and relies on the push-gossiping model , its network overhead can be quite high .\nCompared to the reliable multicast efforts , Bullet behaves favorably in terms of the network overhead because nodes do not `` blindly '' request retransmissions from their peers .\nInstead , Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content .\nFurther , a Bullet node splits the retransmission load between all of its peers .\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest .\nHowever , this does not guarantee that packets received in parallel from multiple peers will not be duplicates .\nMore importantly , the multicast recovery methods are limited by the bandwidth through the tree , while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree .\nNarada [ 19 ] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links .\nIt then runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source .\nNarada nodes maintain global knowledge about all group participants , limiting system scalability to several tens of nodes .\nFurther , the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent .\nOn the other hand , the fundamental goal of Bullet is to increase bandwidth through download of disjoint data from multiple peers .\nOvercast [ 21 ] is an example of a bandwidth-efficient overlay tree construction algorithm .\nIn this system , all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth .\nBullet is expected to be more resilient to node departures than any tree , including Overcast .\nInstead of a node waiting to get the data it missed from a new parent , a node can start getting data from its perpendicular peers .\nOvercast convergence time is limited by probes to immediate siblings and ancestors .\nBullet is able to provide approximately a target bandwidth without having a fully converged tree .\nIn parallel to our own work , SplitStream [ 9 ] also has the goal of achieving high bandwidth data dissemination .\nIt operates by splitting the multicast stream into k stripes , transmitting each stripe along a separate multicast tree built using Scribe [ 34 ] .\nPerhaps more importantly , SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree , including the links between the data source and the roots of individual 
stripe trees independently chosen by Scribe .\nTo some extent , Bullet and SplitStream are complementary .\nFor instance , Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe .\nCoopNet [ 29 ] considers live content streaming in a peerto-peer environment , subject to high node churn .\nConsequently , the system favors resilience over network efficiency .\nIn the case of on-demand streaming , CoopNet [ 30 ] addresses\nthe flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content .\nCompared to CoopNet , Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file .\n6 .\nCONCLUSIONS\nTypically , high bandwidth overlay data streaming takes place over a distribution tree .\nIn this paper , we argue that , in fact , an overlay mesh is able to deliver fundamentally higher bandwidth .\nOf course , a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers .\nThis paper presents the design and implementation of Bullet , a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures .\nSpecifically , this paper makes the following contributions : 9 We present the design and analysis of Bullet , an overlay construction algorithm that creates a mesh over any distribution tree and allows overlay participants to achieve a higher bandwidth throughput than traditional data streaming .\nAs a related benefit , we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques .\n9 We provide a technique for recovering missing data from peers in a scalable and efficient manner .\nRanSub periodically disseminates summaries of data sets received by a changing , uniformly random subset of global participants .\n9 We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes .\n9 A large-scale evaluation of 1000 overlay participants running in an emulated 20,000 node network topology , as well as experimentation on top of the PlanetLab Internet testbed , shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth tree .", "lvl-2": "Bullet : High Bandwidth Data Dissemination Using an Overlay Mesh\nABSTRACT\nIn recent years , overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet .\nTypically , nodes self-organize with the goal of forming an efficient overlay tree , one that meets performance targets without placing undue burden on the underlying network .\nIn this paper , we target high-bandwidth data distribution from a single source to a large number of receivers .\nApplications include large-file transfers and real-time multimedia streaming .\nFor these applications , we argue that an overlay mesh , rather than a tree , can deliver fundamentally higher bandwidth and reliability relative to typical tree structures .\nThis paper presents Bullet , a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh .\nWe construct Bullet around the insight that 
data should be distributed in a disjoint manner to strategic points in the network .\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel .\nKey contributions of this work include : i ) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node , ii ) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items , and iii ) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances .\nIn addition , we find that , relative to tree-based solutions , Bullet reduces the need to perform expensive bandwidth probing .\nIn a tree , it is critical that a node 's parent delivers a high rate of application data to each child .\nIn Bullet however , nodes simultaneously receive data from multiple sources in parallel , making it less important to locate any single source capable of sustaining a high transmission rate .\n1 .\nINTRODUCTION\nIn this paper , we consider the following general problem .\nGiven a sender and a large set of interested receivers spread across the Internet , how can we maximize the amount of bandwidth delivered to receivers ?\nOur problem domain includes software or video distribution and real-time multimedia streaming .\nTraditionally , native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion .\nHowever , a number of considerations , including scale , reliability , and congestion control , have limited the wide-scale deployment of IP multicast .\nEven if all these problems were to be addressed , IP multicast does not consider bandwidth when constructing its distribution tree .\nMore recently , overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery .\nTypical overlay structures attempt to mimic the structure of multicast routing trees .\nIn network-layer multicast however , interior nodes consist of high speed routers with limited processing power and extensibility .\nOverlays , on the other hand , use programmable ( and hence extensible ) end hosts as interior nodes in the overlay tree , with these hosts acting as repeaters to multiple children down the tree .\nOverlays have shown tremendous promise for multicast-style applications .\nHowever , we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability .\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree .\nAny loss high up the tree will reduce the bandwidth available to receivers lower down the tree .\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [ 2 , 6 ] .\nHowever , fundamentally , the bandwidth available to any host is limited by the bandwidth available from that node 's single parent in the tree .\nThus , our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined .\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss , we propose that participants in a multicast overlay cooperate to strategically\ntransmit disjoint data sets to various points in 
the network .\nHere , the sender splits data into sequential blocks .\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the network .\nNodes still receive a set of objects from their parents , but they are then responsible for locating peers that hold missing data objects .\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants .\nIn this way , we avoid the problem of locating the `` last object '' , which may only be available at a few nodes .\nOne hypothesis of this work is that , relative to a tree , this model will result in higher bandwidth -- leveraging the bandwidth from simultaneous parallel downloads from multiple sources rather than a single parent -- and higher reliability -- retrieving data from multiple peers reduces the potential damage from a single node failure .\nTo illustrate Bullet 's behavior , consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available ( TCP-friendly ) bandwidth to each of A and B. However , there is also 1 Mbps of available bandwidth between A and B .\nIn this example , Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B .\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another , effectively achieving a retrieval rate of 2 Mbps .\nOn the other hand , any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data .\nAny solution for achieving the above model must maintain a number of properties .\nFirst , it must be TCP friendly [ 15 ] .\nNo flow should consume more than its fair share of the bottleneck bandwidth and each flow must respond to congestion signals ( losses ) by reducing its transmission rate .\nSecond , it must impose low control overhead .\nThere are many possible sources of such overhead , including probing for available bandwidth between nodes , locating appropriate nodes to `` peer '' with for data retrieval and redundantly receiving the same data objects from multiple sources .\nThird , the algorithm should be decentralized and scalable to thousands of participants .\nNo node should be required to learn or maintain global knowledge , for instance global group membership or the set of data objects currently available at all nodes .\nFinally , the approach must be robust to individual failures .\nFor example , the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants ; no single failure should result in the complete loss of data for any significant fraction of nodes , as might be the case for a single node failure `` high up '' in a multicast overlay tree .\nIn this context , this paper presents the design and evaluation of Bullet , an algorithm for constructing an overlay mesh that attempts to maintain the above properties .\nBullet nodes begin by self-organizing into an overlay tree , which can be constructed by any of a number of existing techniques [ 1 , 18 , 21 , 24 , 34 ] .\nEach Bullet node , starting with the root of the underlying tree , then transmits a disjoint set of data to each of its children , with the goal of maintaining uniform representativeness of each data item across all participants .\nThe level of disjointness is determined by the bandwidth available to each of its children .\nBullet then employs a scalable 
and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node .\nThus , Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree .\nDepending on the type of data being transmitted , Bullet can optionally employ a variety of encoding schemes , for instance Erasure codes [ 7 , 26 , 25 ] or Multiple Description Coding ( MDC ) [ 17 ] , to efficiently disseminate data , adapt to variable bandwidth , and recover from losses .\nFinally , we use TFRC [ 15 ] to transfer data both down the overlay tree and among peers .\nThis ensures that the entire overlay behaves in a congestion-friendly manner , adjusting its transmission rate on a per-connection basis based on prevailing network conditions .\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree .\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol .\nIn these trees , it is critical that each participant locates a parent via probing with a high level of available bandwidth because it receives data from only a single source ( its parent ) .\nThus , even once the tree is constructed , nodes must continue their probing to adapt to dynamically changing network conditions .\nWhile bandwidth probing is an active area of research [ 20 , 35 ] , accurate results generally require the transfer of a large amount of data to gain confidence in the results .\nOur approach with Bullet allows receivers to obtain high bandwidth in aggregate using individual transfers from peers spread across the system .\nThus , in Bullet , the bandwidth available from any individual peer is much less important than in any bandwidthoptimized tree .\nFurther , all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh .\nWe have completed a prototype of Bullet running on top of a number of overlay trees .\nOur evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree ( using an offline algorithm and global network topology information ) , all while remaining TCP friendly .\nWe also deployed our prototype across the PlanetLab [ 31 ] wide-area testbed .\nFor these live Internet runs , we find that Bullet can deliver comparable bandwidth performance improvements .\nIn both cases , the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node , acceptable for our target high-bandwidth , large-scale scenarios .\nThe remainder of this paper is organized as follows .\nSection 2 presents Bullet 's system components including RanSub , informed content delivery , and TFRC .\nSection 3 then details Bullet , an efficient data distribution system for bandwidth intensive applications .\nSection 4 evaluates Bullet 's performance for a variety of network topologies , and compares it to existing multicast techniques .\nSection 5 places our work in the context of related efforts and Section 6 presents our conclusions .\n2 .\nSYSTEM COMPONENTS\nOur approach to high bandwidth data dissemination centers around the techniques depicted in Figure 1 .\nFirst , we split the target data stream into blocks which are further subdivided into individual ( typically 
packet-sized ) objects .\nDepending on the requirements of the target applications , objects may be encoded [ 17 , 26 ] to make data recovery more efficient .\nNext , we purposefully disseminate disjoint objects\nFigure 1 : High-level view of Bullet 's operation .\nto different clients at a rate determined by the available bandwidth to each client .\nWe use the equation-based TFRC protocol to communicate among all nodes in the overlay in a congestion responsive and TCP friendly manner .\nGiven the above techniques , data is spread across the overlay tree at a rate commensurate with the available bandwidth in the overlay tree .\nOur overall goal however is to deliver more bandwidth than would otherwise be available through any tree .\nThus , at this point , nodes require a scalable technique for locating and retrieving disjoint data from their peers .\nIn essence , these perpendicular links across the overlay form a mesh to augment the bandwidth available through the tree .\nIn Figure 1 , node D only has sufficient bandwidth to receive 3 objects per time unit from its parent .\nHowever , it is able to locate two peers , C and E , who are able to transmit `` missing '' data objects , in this example increasing delivered bandwidth from 3 objects per time unit to 6 data objects per time unit .\nLocating appropriate remote peers can not require global state or global communication .\nThus , we propose the periodic dissemination of changing , uniformly random subsets of global state to each overlay node once per configurable time period .\nThis random subset contains summary tickets of the objects available at a subset of the nodes in the system .\nEach node uses this information to request data objects from remote nodes that have significant divergence in object membership .\nIt then attempts to establish a number of these peering relationships with the goals of minimizing overlap in the objects received from each peer and maximizing the total useful bandwidth delivered to it .\nIn the remainder of this section , we provide brief background on each of the techniques that we employ as fundamental building blocks for our work .\nSection 3 then presents the details of the entire Bullet architecture .\n2.1 Data Encoding\nDepending on the type of data being distributed through the system , a number of data encoding schemes can improve system efficiency .\nFor instance , if multimedia data is being distributed to a set of heterogeneous receivers with variable bandwidth , MDC [ 17 ] allows receivers obtaining different subsets of the data to still maintain a usable multimedia stream .\nFor dissemination of a large file among a set of receivers , Erasure codes enable receivers not to focus on retrieving every transmitted data packet .\nRather , after obtaining a threshold minimum number of packets , receivers are able to decode the original data stream .\nOf course , Bullet is amenable to a variety of other encoding schemes or even the `` null '' encoding scheme , where the original data stream is transmitted best-effort through the system .\nIn this paper , we focus on the benefits of a special class of erasure-correcting codes used to implement the `` digital fountain '' [ 7 ] approach .\nRedundant Tornado [ 26 ] codes are created by performing XOR operations on a selected number of original data packets , and then transmitted along with the original data packets .\nTornado codes require any ( 1 + e ) k correctly received packets to reconstruct the original k data packets , with the typically low 
reception overhead ( e ) of 0.03 \u2212 0.05 .\nIn return , they provide significantly faster encoding and decoding times .\nAdditionally , the decoding algorithm can run in real-time , and the reconstruction process can start as soon as sufficiently many packets have arrived .\nTornado codes require a predetermined stretch factor ( n/k , where n is the total number of encoded packets ) , and their encoding time is proportional to n. LT codes [ 25 ] remove these two limitations , while maintaining a low reception overhead of 0.05 .\n2.2 RanSub\nTo address the challenge of locating disjoint content within the system , we use RanSub [ 24 ] , a scalable approach to distributing changing , uniform random subsets of global state to all nodes of an overlay tree .\nRanSub assumes the presence of some scalable mechanism for efficiently building and maintaining the underlying tree .\nA number of such techniques are described in [ 1 , 18 , 21 , 24 , 34 ] .\nRanSub distributes random subsets of participating nodes throughout the tree using collect and distribute messages .\nCollect messages start at the leaves and propagate up the tree , leaving state at each node along the path to the root .\nDistribute messages start at the root and travel down the tree , using the information left at the nodes during the previous collect round to distribute uniformly random subsets to all participants .\nUsing the collect and distribute messages , RanSub distributes a random subset of participants to each node once per epoch .\nThe lower bound on the length of an epoch is determined by the time it takes to propagate data up then back down the tree , or roughly twice the height of the tree .\nFor appropriately constructed trees , the minimum epoch length will grow with the logarithm of the number of participants , though this is not required for correctness .\nAs part of the distribute message , each participant sends a uniformly random subset of remote nodes , called a distribute set , down to its children .\nThe contents of the distribute set are constructed using the collect set gathered during the previous collect phase .\nDuring this phase , each participant sends a collect set consisting of a random subset of its descendant nodes up the tree to the root along with an estimate of its total number of descendants .\nAfter the root receives all collect sets and the collect phase completes , the distribute phase begins again in a new epoch .\nOne of the key features of RanSub is the Compact operation .\nThis is the process used to ensure that membership in a collect set propagated by a node to its parent is both random and uniformly representative of all members of the sub-tree rooted at that node .\nCompact takes multiple fixedsize subsets and the total population represented by each subset as input , and generates a new fixed-size subset .\nThe\nFigure 2 : This example shows the two phases of the RanSub protocol that occur in one epoch .\nThe collect phase is shown on the left , where the collect sets are traveling up the overlay to the root .\nThe distribute phase on the right shows the distribute sets traveling down the overlay to the leaf nodes .\nmembers of the resulting set are uniformly random representatives of the input subset members .\nRanSub offers several ways of constructing distribute sets .\nFor our system , we choose the RanSub-nondescendants option .\nIn this case , each node receives a random subset consisting of all nodes excluding its descendants .\nThis is appropriate for our download 
structure where descendants are expected to have less content than an ancestor node in most cases .\nA parent creates RanSub-nondescendants distribute sets for each child by compacting collect sets from that child 's siblings and its own distribute set .\nThe result is a distribute set that contains a random subset representing all nodes in the tree except for those rooted at that particular child .\nWe depict an example of RanSub 's collect-distribute process in Figure 2 .\nIn the figure , AS stands for node A 's state .\n2.3 Informed Content Delivery Techniques\nAssuming we can enable a node to locate a peer with disjoint content using RanSub , we need a method for reconciling the differences in the data .\nAdditionally , we require a bandwidth-efficient method with low computational overhead .\nWe chose to implement the approximate reconciliation techniques proposed in [ 6 ] for these tasks in Bullet .\nTo describe the content , nodes maintain working sets .\nThe working set contains sequence numbers of packets that have been successfully received by each node over some period of time .\nWe need the ability to quickly discern the resemblance between working sets from two nodes and decide whether a fine-grained reconciliation is beneficial .\nSummary tickets , or min-wise sketches [ 5 ] , serve this purpose .\nThe main idea is to create a summary ticket that is an unbiased random sample of the working set .\nA summary ticket is a small fixed-size array .\nEach entry in this array is maintained by a specific permutation function .\nThe goal is to have each entry populated by the element with the smallest permuted value .\nTo insert a new element into the summary ticket , we apply the permutation functions in order and update array values as appropriate .\nThe permutation function can be thought of as a specialized hash function .\nThe choice of permutation functions is important as the quality of the summary ticket depends directly on the randomness properties of the permutation functions .\nSince we require them to have a low computational overhead , we use simple permutation functions , such as Pj ( x ) = ( ax + b ) mod | U | , where U is the universe size ( dependant on the data encoding scheme ) .\nTo compute the resemblance between two working sets , we compute the number of summary ticket entries that have the same value , and divide it by the total number of entries in the summary tickets .\nFigure 3 shows the way the permutation functions are used to populate the summary ticket .\nFigure 3 : Example showing a sample summary ticket being constructed from the working set .\nTo perform approximate fine-grain reconciliation , a peer A sends its digest to peer B and expects to receive packets not described in the digest .\nFor this purpose , we use a Bloom filter [ 4 ] , a bit array of size m with k independent associated hash functions .\nAn element s from the set of received keys S = { so , s2 , ... , sn \u2212 1 } is inserted into the filter by computing the hash values h0 , h1 , ... 
, hk \u2212 1 of s and setting the bits in the array that correspond to the hashed\nvalues .\nTo check whether an element x is in the Bloom filter , we hash it using the hash functions and check whether all positions in the bit array are set .\nIf at least one is not set , we know that the Bloom filter does not contain x .\nWhen using Bloom filters , the insertion of different elements might cause all the positions in the bit array corresponding to an element that is not in the set to be nonzero .\nIn this case , we have a false positive .\nTherefore , it is possible that peer B will not send a packet to peer A even though A is missing it .\nOn the other hand , a node will never send a packet that is described in the Bloom filter , i.e. there are no false negatives .\nThe probability of getting a false positive pf on the membership query can be expressed as a function of the ratio m/n and the number of hash functions k : pf = ( 1 \u2212 e^( \u2212kn/m ) )^k .\nWe can therefore choose the size of the Bloom filter and the number of hash functions that will yield a desired false positive ratio .\n2.4 TCP Friendly Rate Control\nAlthough most traffic in the Internet today is best served by TCP , applications that require a smooth sending rate and that have a higher tolerance for loss often find TCP 's reaction to a single dropped packet to be unnecessarily severe .\nTCP Friendly Rate Control , or TFRC , targets unicast streaming multimedia applications with a need for less drastic responses to single packet losses [ 15 ] .\nTCP halves the sending rate as soon as one packet loss is detected .\nAlternatively , TFRC is an equation-based congestion control protocol that is based on loss events , which consist of multiple packets being dropped within one round-trip time .\nUnlike TCP , the goal of TFRC is not to find and use all available bandwidth , but instead to maintain a relatively steady sending rate while still being responsive to congestion .\nTo guarantee fairness with TCP , TFRC uses the response function that describes the steady-state sending rate of TCP to determine the transmission rate in TFRC .\nThe formula of the TCP response function [ 27 ] used in TFRC to describe the sending rate is :\nT = s / ( R * sqrt ( 2p/3 ) + tRTO * ( 3 * sqrt ( 3p/8 ) ) * p * ( 1 + 32p^2 ) )\nThis is the expression for the sending rate T in bytes/second , as a function of the round-trip time R in seconds , loss event rate p , packet size s in bytes , and TCP retransmit value tRTO in seconds .\nTFRC senders and receivers must cooperate to achieve a smooth transmission rate .\nThe sender is responsible for computing the weighted round-trip time estimate R between sender and receiver , as well as determining a reasonable retransmit timeout value tRTO .\nIn most cases , using the simple formula tRTO = 4R provides the necessary fairness with TCP .\nThe sender is also responsible for adjusting the sending rate T in response to new values of the loss event rate p reported by the receiver .\nThe sender obtains a new measure for the loss event rate each time a feedback packet is received from the receiver .\nUntil the first loss is reported , the sender doubles its transmission rate each time it receives feedback just as TCP does during slow-start .\nThe main role of the receiver is to send feedback to the sender once per round-trip time and to calculate the loss event rate included in the feedback packets .\nTo obtain the loss event rate , the receiver maintains a loss interval array that contains values for the last eight loss intervals .\nA loss interval is defined as the number of packets received correctly between
two loss events .\nThe array is continually updated as losses are detected .\nA weighted average is computed based on the sum of the loss interval values , and the inverse of the sum is the reported loss event rate , p .\nWhen implementing Bullet , we used an unreliable version of TFRC .\nWe wanted a transport protocol that was congestion aware and TCP friendly .\nLost packets were more easily recovered from other sources rather than waiting for a retransmission from the initial sender .\nHence , we eliminate retransmissions from TFRC .\nFurther , TFRC does not aggressively seek newly available bandwidth like TCP , a desirable trait in an overlay tree where there might be multiple competing flows sharing the same links .\nFor example , if a leaf node in the tree tried to aggressively seek out new bandwidth , it could create congestion all the way up to the root of the tree .\nBy using TFRC we were able to avoid these scenarios .\n3 .\nBULLET\nBullet is an efficient data distribution system for bandwidth intensive applications .\nWhile many current overlay network distribution algorithms use a distribution tree to deliver data from the tree 's root to all other nodes , Bullet layers a mesh on top of an original overlay tree to increase overall bandwidth to all nodes in the tree .\nHence , each node receives a parent stream from its parent in the tree and some number of perpendicular streams from chosen peers in the overlay .\nThis has significant bandwidth impact when a single node in the overlay is unable to deliver adequate bandwidth to a receiving node .\nBullet requires an underlying overlay tree for RanSub to deliver random subsets of participants ' state to nodes in the overlay , informing them of a set of nodes that may be good candidates for retrieving data not available from any of the node 's current peers and parent .\nWhile we also use the underlying tree for baseline streaming , this is not critical to Bullet 's ability to efficiently deliver data to nodes in the overlay .\nAs a result , Bullet is capable of functioning on top of essentially any overlay tree .\nIn our experiments , we have run Bullet over random and bandwidth-optimized trees created offline ( with global topological knowledge ) .\nBullet registers itself with the underlying overlay tree so that it is informed when the overlay changes as nodes come and go or make performance transformations in the overlay .\nAs with streaming overlay trees , Bullet can use standard transports such as TCP and UDP as well as our implementation of TFRC .\nFor the remainder of this paper , we assume the use of TFRC since we primarily target streaming high-bandwidth content and we do not require reliable or in-order delivery .\nFor simplicity , we assume that packets originate at the root of the tree and are tagged with increasing sequence numbers .\nEach node receiving a packet will optionally forward it to each of its children , depending on a number of factors relating to the child 's bandwidth and its relative position in the tree .\n3.1 Finding Overlay Peers\nRanSub periodically delivers subsets of uniformly randomly selected nodes to each participant in the overlay .\nBullet receivers use these lists to locate remote peers able to transmit missing data items with good bandwidth .\nRanSub messages contain a set of summary tickets that include a small ( 120 bytes ) summary of the data that each node contains .\nRanSub delivers subsets of these summary tickets to nodes every configurable epoch ( 5 seconds
by default ) .\nEach node in the tree maintains a working set of the packets it has received thus far , indexed by sequence numbers .\nNodes associate each working set with a Bloom filter that maintains a summary of the packets received thus far .\nSince the Bloom filter does not exceed a specific size ( m ) and we would like to limit the rate of false positives , Bullet periodically cleans up the Bloom filter by removing lower sequence numbers from it .\nThis allows us to keep the Bloom filter population n from growing at an unbounded rate .\nThe net effect is that a node will attempt to recover packets for a finite amount of time depending on the packet arrival rate .\nSimilarly , Bullet removes older items that are not needed for data reconstruction from its working set and summary ticket .\nWe use the collect and distribute phases of RanSub to carry Bullet summary tickets up and down the tree .\nIn our current implementation , we use a set size of 10 summary tickets , allowing each collect and distribute to fit well within the size of a non-fragmented IP packet .\nThough Bullet supports larger set sizes , we expect this parameter to be tunable to specific applications ' needs .\nIn practice , our default size of 10 yields favorable results for a variety of overlays and network topologies .\nIn essence , during an epoch a node receives a summarized partial view of the system 's state at that time .\nUpon receiving a random subset each epoch , a Bullet node may choose to peer with the node having the lowest similarity ratio when compared to its own summary ticket .\nThis is done only when the node has sufficient space in its sender list to accept another sender ( senders with lackluster performance are removed from the current sender list as described in section 3.4 ) .\nOnce a node has chosen the best node it sends it a peering request containing the requesting node 's Bloom filter .\nSuch a request is accepted by the potential sender if it has sufficient space in its receiver list for the incoming receiver .\nOtherwise , the send request is rejected ( space is periodically created in the receiver lists as further described in section 3.4 ) .\n3.2 Recovering Data From Peers\nAssuming it has space for the new peer , a recipient of the peering request installs the received Bloom filter and will periodically transmit keys not present in the Bloom filter to the requesting node .\nThe requesting node will refresh its installed Bloom filters at each of its sending peers periodically .\nAlong with the fresh filter , a receiving node will also assign a portion of the sequence space to each of its senders .\nIn this way , a node is able the reduce the likelihood that two peers simultaneously transmit the same key to it , wasting network resources .\nA node divides the sequence space in its current working set among each of its senders uniformly .\nAs illustrated in Figure 4 , a Bullet receiver views the data space as a matrix of packet sequences containing s rows , where s is its current number of sending peers .\nA receiver periodically ( every 5 seconds by default ) updates each sender with its current Bloom filter and the range of sequences covered in its Bloom filter .\nThis identifies the range of packets that the receiver is currently interested in recovering .\nOver time , this range shifts as depicted in Figure 4-b ) .\nIn addition , the receiving node assigns to each sender a row from the matrix , labeled mod .\nA sender will forward packets to\nFigure 4 : A Bullet receiver views data 
as a matrix\nof sequenced packets with rows equal to the number of peer senders it currently has .\nIt requests data within the range ( Low , High ) of sequence numbers based on what it has received .\na ) The receiver requests a specific row in the sequence matrix from each sender .\nb ) As it receives more data , the range of sequences advances and the receiver requests different rows from senders .\nthe receiver that have a sequence number x such that x modulo s equals the mod number .\nIn this fashion , receivers register to receive disjoint data from their sending peers .\nBy specifying ranges and matrix rows , a receiver is unlikely to receive duplicate data items , which would result in wasted bandwidth .\nA duplicate packet , however , may be received when a parent recovers a packet from one of its peers and relays the packet to its children ( and descendants ) .\nIn this case , a descendant would receive the packet out of order and may have already recovered it from one of its peers .\nIn practice , this wasteful reception of duplicate packets is tolerable ; less than 10 % of all received packets are duplicates in our experiments .\n3.3 Making Data Disjoint\nWe now provide details of Bullet 's mechanisms to increase the ease by which nodes can find disjoint data not provided by parents .\nWe operate on the premise that the main challenge in recovering lost data packets transmitted over an overlay distribution tree lies in finding the peer node housing the data to recover .\nMany systems take a hierarchical approach to this problem , propagating repair requests up the distribution tree until the request can be satisfied .\nThis ultimately leads to scalability issues at higher levels in the hierarchy particularly when overlay links are bandwidthconstrained .\nOn the other hand , Bullet attempts to recover lost data from any non-descendant node , not just ancestors , thereby increasing overall system scalability .\nIn traditional overlay distribution trees , packets are lost by the transmission transport and/or the network .\nNodes attempt to stream data as fast as possible to each child and have essentially no control over which portions of the data stream are dropped by the transport or network .\nAs a result , the streaming subsystem has no control over how many nodes in the system will ultimately receive a particular portion of the data .\nIf few nodes receive a particular range of packets , recovering these pieces of data becomes more difficult , requiring increased communication costs , and leading to scalability problems .\nIn contrast , Bullet nodes are aware of the bandwidth achievable to each of its children using the underlying transport .\nIf\na child is unable to receive the streaming rate that the parent receives , the parent consciously decides which portion of the data stream to forward to the constrained child .\nIn addition , because nodes recover data from participants chosen uniformly at random from the set of non-descendants , it is advantageous to make each transmitted packet recoverable from approximately the same number of participant nodes .\nThat is , given a randomly chosen subset of peer nodes , it is with the same probability that each node has a particular data packet .\nWhile not explicitly proven here , we believe that this approach maximizes the probability that a lost data packet can be recovered , regardless of which packet is lost .\nTo this end , Bullet distributes incoming packets among one or more children in hopes that the expected number of 
nodes receiving each packet is approximately the same .\nA node p maintains for each child , i , a limiting and sending factor , lfi and sfi .\nThese factors determine the proportion of p 's received data rate that it will forward to each child .\nThe sending factor sfi is the portion of the parent stream ( rate ) that each child should `` own '' based on the number of descendants the child has .\nThe more descendants a child has , the larger the portion of received data it should own .\nThe limiting factor lfi represents the proportion of the parent rate beyond the sending factor that each child can handle .\nFor example , a child with one descendant , but high bandwidth would have a low sending factor , but a very high limiting factor .\nThough the child is responsible for owning a small portion of the received data , it actually can receive a large portion of it .\nBecause RanSub collects descendant counts di for each child i , Bullet simply makes a call into RanSub when sending data to determine the current sending factors of its children .\nFor each child i out of k total , we set the sending factor to be :\nsfi = di / ( d1 + d2 + ... + dk )\nIn addition , a node tracks the data successfully transmitted via the transport .\nThat is , Bullet data transport sockets are non-blocking ; successful transmissions are send attempts that are accepted by the non-blocking transport .\nIf the transport would block on a send ( i.e. , transmission of the packet would exceed the TCP-friendly fair share of network resources ) , the send fails and is counted as an unsuccessful send attempt .\nWhen a data packet is received by a parent , it calculates the proportion of the total data stream that has been sent to each child , thus far , in this epoch .\nIt then assigns ownership of the current packet to the child with sending proportion farthest away from its sfi as illustrated in Figure 5 .\nHaving chosen the target of a particular packet , the parent attempts to forward the packet to the child .\nIf the send is not successful , the node must find an alternate child to own the packet .\nThis occurs when a child 's bandwidth is not adequate to fulfill its responsibilities based on its descendants ( sfi ) .\nTo compensate , the node attempts to deterministically find a child that can own the packet ( as evidenced by its transport accepting the packet ) .\nThe net result is that children with more than adequate bandwidth will own more of their share of packets than those with inadequate bandwidth .\nIn the event that no child can accept a packet , it must be dropped , corresponding to the case where the sum of all children 's bandwidths is inadequate to serve the received stream .\nFigure 5 : Pseudo code for Bullet 's disjoint data send routine\nWhile making data more difficult to recover , Bullet still allows for recovery of such data to its children .\nThe sending node will cache the data packet and serve it to its requesting peers .\nThis process allows its children to potentially recover the packet from one of their own peers , to whom additional bandwidth may be available .\nOnce a packet has been successfully sent to the owning child , the node attempts to send the packet to all other children depending on the limiting factors lfi .\nFor each child i , a node attempts to forward the packet deterministically if the packet 's sequence modulo 1/lfi is zero .\nEssentially , this identifies which lfi fraction of packets of the received data stream should be forwarded to each child to make use of the available bandwidth to each .\nIf the
packet transmission is successful , lfi is increased such that one more packet is to be sent per epoch .\nIf the transmission fails , lfi is decreased by the same amount .\nThis allows children limiting factors to be continuously adjusted in response to changing network conditions .\nIt is important to realize that by maintaining limiting factors , we are essentially using feedback from children ( by observing transport behavior ) to determine the best data to stop sending during times when a child can not handle the entire parent stream .\nIn one extreme , if the sum of children bandwidths is not enough to receive the entire parent stream , each child will receive a completely disjoint data stream of packets it owns .\nIn the other extreme , if each\nchild has ample bandwidth , it will receive the entire parent stream as each lfi would settle on 1.0 .\nIn the general case , our owning strategy attempts to make data disjoint among children subtrees with the guiding premise that , as much as possible , the expected number of nodes receiving a packet is the same across all packets .\n3.4 Improving the Bullet Mesh\nBullet allows a maximum number of peering relationships .\nThat is , a node can have up to a certain number of receivers and a certain number of senders ( each defaults to 10 in our implementation ) .\nA number of considerations can make the current peering relationships sub-optimal at any given time : i ) the probabilistic nature of RanSub means that a node may not have been exposed to a sufficiently appropriate peer , ii ) receivers greedily choose peers , and iii ) network conditions are constantly changing .\nFor example , a sender node may wind up being unable to provide a node with very much useful ( non-duplicate ) data .\nIn such a case , it would be advantageous to remove that sender as a peer and find some other peer that offers better utility .\nEach node periodically ( every few RanSub epochs ) evaluates the bandwidth performance it is receiving from its sending peers .\nA node will drop a peer if it is sending too many duplicate packets when compared to the total number of packets received .\nThis threshold is set to 50 % by default .\nIf no such wasteful sender is found , a node will drop the sender that is delivering the least amount of useful data to it .\nIt will replace this sender with some other sending peer candidate , essentially reserving a trial slot in its sender list .\nIn this way , we are assured of keeping the best senders seen so far and will eliminate senders whose performance deteriorates with changing network conditions .\nLikewise , a Bullet sender will periodically evaluate its receivers .\nEach receiver updates senders of the total received bandwidth .\nThe sender , knowing the amount of data it has sent to each receiver , can determine which receiver is benefiting the least by peering with this sender .\nThis corresponds to the one receiver acquiring the least portion of its bandwidth through this sender .\nThe sender drops this receiver , creating an empty slot for some other trial receiver .\nThis is similar to the concept of weans presented in [ 24 ] .\n4 .\nEVALUATION\nWe have evaluated Bullet 's performance in real Internet environments as well as the ModelNet [ 37 ] IP emulation framework .\nWhile the bulk of our experiments use ModelNet , we also report on our experience with Bullet on the PlanetLab Internet testbed [ 31 ] .\nIn addition , we have implemented a number of underlying overlay network trees upon which Bullet can execute 
.\nBecause Bullet performs well over a randomly created overlay tree , we present results with Bullet running over such a tree compared against an offline greedy bottleneck bandwidth tree algorithm using global topological information described in Section 4.1 .\nAll of our implementations leverage a common development infrastructure called MACEDON [ 33 ] that allows for the specification of overlay algorithms in a simple domainspecific language .\nIt enables the reuse of the majority of common functionality in these distributed systems , including probing infrastructures , thread management , message passing , and debugging environment .\nAs a result , we believe that our comparisons qualitatively show algorithmic differences rather than implementation intricacies .\nOur implementation of the core Bullet logic is under 1000 lines of code in this infrastructure .\nOur ModelNet experiments make use of 50 2Ghz Pentium4 's running Linux 2.4.20 and interconnected with 100 Mbps and 1 Gbps Ethernet switches .\nFor the majority of these experiments , we multiplex one thousand instances ( overlay participants ) of our overlay applications across the 50 Linux nodes ( 20 per machine ) .\nIn ModelNet , packet transmissions are routed through emulators responsible for accurately emulating the hop-by-hop delay , bandwidth , and congestion of a network topology .\nIn our evaluations , we used four 1.4 Ghz Pentium III 's running FreeBSD-4 .7 as emulators .\nThis platform supports approximately 2-3 Gbps of aggregate simultaneous communication among end hosts .\nFor most of our ModelNet experiments , we use 20,000-node INET-generated topologies [ 10 ] .\nWe randomly assign our participant nodes to act as clients connected to one-degree stub nodes in the topology .\nWe randomly select one of these participants to act as the source of the data stream .\nPropagation delays in the network topology are calculated based on the relative placement of the network nodes in the plane by INET .\nBased on the classification in [ 8 ] , we classify network links as being Client-Stub , Stub-Stub , TransitStub , and Transit-Transit depending on their location in the network .\nWe restrict topological bandwidth by setting the bandwidth for each link depending on its type .\nEach type of link has an associated bandwidth range from which the bandwidth is chosen uniformly at random .\nBy changing these ranges , we vary bandwidth constraints in our topologies .\nFor our experiments , we created three different ranges corresponding to low , medium , and high bandwidths relative to our typical streaming rates of 600-1000 Kbps as specified in Table 1 .\nWhile the presented ModelNet results are restricted to two topologies with varying bandwidth constraints , the results of experiments with additional topologies all show qualitatively similar behavior .\nWe do not implement any particular coding scheme for our experiments .\nRather , we assume that either each sequence number directly specifies a particular data block and the block offset for each packet , or we are distributing data within the same block for LT Codes , e.g. 
, when distributing a file .\n4.1 Offline Bottleneck Bandwidth Tree\nOne of our goals is to determine Bullet 's performance relative to the best possible bandwidth-optimized tree for a given network topology .\nThis allows us to quantify the possible improvements of an overlay mesh constructed using Bullet relative to the best possible tree .\nWhile we have not yet proven this , we believe that this problem is NP-hard .\nThus , in this section we present a simple greedy offline algorithm to determine the connectivity of a tree likely to deliver a high level of bandwidth .\nIn practice , we are not aware of any scalable online algorithms that are able to deliver the bandwidth of an offline algorithm .\nAt the same time , trees constructed by our algorithm tend to be `` long and skinny '' making them less resilient to failures and inappropriate for delay-sensitive applications ( such as multimedia streaming ) .\nIn addition to any performance comparisons , a Bullet mesh has much lower depth than the bottleneck tree and is more resilient to failure , as discussed in Section 4.6 .\nTable 1 : Bandwidth ranges for link types used in our topologies expressed in Kbps .\nSpecifically , we consider the following problem : given complete knowledge of the topology ( individual link latencies , bandwidth , and packet loss rates ) , what is the overlay tree that will deliver the highest bandwidth to a set of predetermined overlay nodes ?\nWe assume that the throughput of the slowest overlay link ( the bottleneck link ) determines the throughput of the entire tree .\nWe are , therefore , trying to find the directed overlay tree with the maximum bottleneck link .\nAccordingly , we refer to this problem as the overlay maximum bottleneck tree ( OMBT ) .\nIn a simplified case , assuming that congestion only exists on access links and there are no lossy links , there exists an optimal algorithm [ 23 ] .\nIn the more general case of contention on any physical link , and when the system is allowed to choose the routing path between the two endpoints , this problem is known to be NP-hard [ 12 ] , even in the absence of link losses .\nFor the purposes of this paper , our goal is to determine a `` good '' overlay streaming tree that provides each overlay participant with substantial bandwidth , while avoiding overlay links with high end-to-end loss rates .\nWe make the following assumptions :\n1 .\nThe routing path between any two overlay participants is fixed .\nThis closely models the existing overlay network model with IP for unicast routing .\n2 .\nThe overlay tree will use TCP-friendly unicast connections to transfer data point-to-point .\n3 .\nIn the absence of other flows , we can estimate the throughput of a TCP-friendly flow using a steady-state formula [ 27 ] .\n4 .\nWhen several ( n ) flows share the same bottleneck link , each flow can achieve throughput of at most c/n , where c is the physical capacity of the link .\nGiven these assumptions , we concentrate on estimating the throughput available between two participants in the overlay .\nWe start by calculating the throughput using the steady-state formula .\nWe then `` route '' the flow in the network , and consider the physical links one at a time .\nOn each physical link , we compute the fair-share for each of the competing flows .\nThe throughput of an overlay link is then approximated by the minimum of the fair-shares along the routing path , and the formula rate .\nIf some flow does not require the same share of the bottleneck link as other 
competing flows ( i.e. , its throughput might be limited by losses elsewhere in the network ) , then the other flows might end up with a greater share than the one we compute .\nWe do not account for this , as the major goal of this estimate is simply to avoid lossy and highly congested physical links .\nMore formally , we define the problem as follows : Overlay Maximum Bottleneck Tree ( OMBT ) .\nGiven a physical network represented as a graph G = ( V , E ) , a set of overlay participants P ⊆ V , a source node s ∈ P , bandwidth B : E → R+ , loss rate L : E → [ 0 , 1 ] , and propagation delay D : E → R+ for each link , a set of possible overlay links O = { ( v , w ) | v , w ∈ P , v ≠ w } , and a routing table RT : O × E → { 0 , 1 } , find the overlay tree rooted at s and spanning P that maximizes the minimum throughput f ( o ) over its overlay links .\nHere f ( o ) is the TCP steady-state sending rate , computed from the round-trip time d ( o ) = Σe∈o D ( e ) + Σe∈o' D ( e ) ( given overlay link o = ( v , w ) and its reverse o ' = ( w , v ) ) and from the end-to-end loss rate l ( o ) = 1 − Πe∈o ( 1 − L ( e ) ) .\nWe write e ∈ o to express that link e is included in o 's routing path ( RT ( o , e ) = 1 ) .\nAssuming that we can estimate the throughput of a flow , we proceed to formulate a greedy OMBT algorithm .\nThis algorithm is non-optimal , but a similar approach was found to perform well [ 12 ] .\nOur algorithm is similar to the Widest Path Heuristic ( WPH ) [ 12 ] , and more generally to Prim 's MST algorithm [ 32 ] .\nDuring its execution , we maintain the set of nodes already in the tree , and the set of remaining nodes .\nTo grow the tree , we consider all the overlay links leading from the nodes in the tree to the remaining nodes .\nWe greedily pick the node with the highest throughput overlay link .\nUsing this overlay link might cause us to route traffic over physical links traversed by some other tree flows .\nSince we do not re-examine the throughput of nodes that are already in the tree , they might end up being connected to the tree with slower overlay links than initially estimated .\nHowever , by attaching the node with the highest residual bandwidth at every step , we hope to lessen the effects of after-the-fact physical link sharing .\nWith the synthetic topologies we use for our emulation environment , we have not found this inaccuracy to severely impact the quality of the tree .\n4.2 Bullet vs. 
Streaming\nWe have implemented a simple streaming application that is capable of streaming data over any specified tree .\nIn our implementation , we are able to stream data through overlay trees using UDP , TFRC , or TCP .\nFigure 6 shows average bandwidth that each of 1000 nodes receives via this streaming as time progresses on the x-axis .\nIn this example , we use TFRC to stream 600 Kbps over our offline bottleneck bandwidth tree and a random tree ( other random trees exhibit qualitatively similar behavior ) .\nIn these experiments , streaming begins 100 seconds into each run .\nWhile the random tree delivers an achieved bandwidth of under 100 Kbps , our offline algorithm overlay delivers approximately 400 Kbps of data .\nFor this experiment , bandwidths were set to the medium range from Table 1 .\nWe believe that any degree-constrained online bandwidth overlay tree algorithm would exhibit similar ( or lower ) behavior to our bandwidth\nFigure 6 : Achieved bandwidth over time for TFRC streaming over the bottleneck bandwidth tree and a random tree .\noptimized overlay .\nHence , Bullet 's goal is to overcome this bandwidth limit by allowing for the perpendicular reception of data and by utilizing disjoint data flows in an attempt to match or exceed the performance of our offline algorithm .\nTo evaluate Bullet 's ability to exceed the bandwidth achievable via tree distribution overlays , we compare Bullet running over a random overlay tree to the streaming behavior shown in Figure 6 .\nFigure 7 shows the average bandwidth received by each node ( labeled Useful total ) with standard deviation .\nThe graph also plots the total amount of data received and the amount of data a node receives from its parent .\nFor this topology and bandwidth setting , Bullet was able to achieve an average bandwidth of 500 Kbps , fives times that achieved by the random tree and more than 25 % higher than the offline bottleneck bandwidth algorithm .\nFurther , the total bandwidth ( including redundant data ) received by each node is only slightly higher than the useful content , meaning that Bullet is able to achieve high bandwidth while wasting little network resources .\nBullet 's use of TFRC in this example ensures that the overlay is TCP friendly throughout .\nThe average per-node control overhead is approximately 30 Kbps .\nBy tracing certain packets as they move through the system , we are able to acquire link stress estimates of our system .\nThough the link stress can be different for each packet since each can take a different path through the overlay mesh , we average link stress due to each traced packet .\nFor this experiment , Bullet has an average link stress of approximately 1.5 with an absolute maximum link stress of 22 .\nThe standard deviation in most of our runs is fairly high because of the limited bandwidth randomly assigned to some Client-Stub and Stub-Stub links .\nWe feel that this is consistent with real Internet behavior where clients have widely varying network connectivity .\nA time slice is shown in Figure 8 that plots the CDF of instantaneous bandwidths that each node receives .\nThe graph shows that few client nodes receive inadequate bandwidth even though they are bandwidth constrained .\nThe distribution rises sharply starting at approximately 500 Kbps .\nThe vast majority of nodes receive a stream of 500-600 Kbps .\nWe have evaluated Bullet under a number of bandwidth constraints to determine how Bullet performs relative to the\nFigure 7 : Achieved bandwidth over time for 
Bullet over a random tree .\nFigure 8 : CDF of instantaneous achieved bandwidth at time 430 seconds .\navailable bandwidth of the underlying topology .\nTable 1 describes representative bandwidth settings for our streaming rate of 600 Kbps .\nThe intent of these settings is to show a scenario where more than enough bandwidth is available to achieve a target rate even with traditional tree streaming , an example of where it is slightly not sufficient , and one in which the available bandwidth is quite restricted .\nFigure 9 shows achieved bandwidths for Bullet and the bottleneck bandwidth tree over time generated from topologies with bandwidths in each range .\nIn all of our experiments , Bullet outperforms the bottleneck bandwidth tree by a factor of up to 100 % , depending on how much bandwidth is constrained in the underlying topology .\nIn one extreme , having more than ample bandwidth , Bullet and the bottleneck bandwidth tree are both able to stream at the requested rate ( 600 Kbps in our example ) .\nIn the other extreme , heavily constrained topologies allow Bullet to achieve twice the bandwidth achievable via the bottleneck bandwidth tree .\nFor all other topologies , Bullet 's benefits are somewhere in between .\nIn our example , Bullet running over our medium-constrained bandwidth topology is able to outperform the bottleneck bandwidth tree by a factor of 25 % .\nFurther , we stress that we believe it would\nFigure 9 : Achieved bandwidth for Bullet and bottleneck tree over time for high , medium , and low bandwidth topologies .\nbe extremely difficult for any online tree-based algorithm to exceed the bandwidth achievable by our offline bottleneck algorithm that makes use of global topological information .\nFor instance , we built a simple bandwidth optimizing overlay tree construction based on Overcast [ 21 ] .\nThe resulting dynamically constructed trees never achieved more than 75 % of the bandwidth of our own offline algorithm .\n4.3 Creating Disjoint Data\nBullet 's ability to deliver high bandwidth levels to nodes depends on its disjoint transmission strategy .\nThat is , when bandwidth to a child is limited , Bullet attempts to send the `` correct '' portions of data so that recovery of the lost data is facilitated .\nA Bullet parent sends different data to its children in hopes that each data item will be readily available to nodes spread throughout its subtree .\nIt does so by assigning ownership of data objects to children in a manner that makes the expected number of nodes holding a particular data object equal for all data objects it transmits .\nFigure 10 shows the resulting bandwidth over time for the non-disjoint strategy in which a node ( and more importantly , the root of the tree ) attempts to send all data to each of its children ( subject to independent losses at individual child links ) .\nBecause the children transports throttle the sending rate at each parent , some data is inherently sent disjointly ( by chance ) .\nBy not explicitly choosing which data to send its child , this approach deprives Bullet of 25 % of its bandwidth capability , when compared to the case when our disjoint strategy is enabled in Figure 7 .\n4.4 Epidemic Approaches\nIn this section , we explore how Bullet compares to data dissemination approaches that use some form of epidemic routing .\nWe implemented a form of `` gossiping '' , where a node forwards non-duplicate packets to a randomly chosen number of nodes in its local view .\nThis technique does not use a tree for 
dissemination , and is similar to lpbcast [ 14 ] ( recently improved to incorporate retrieval of data objects [ 13 ] ) .\nWe do not disseminate packets every T seconds ; instead we forward them as soon as they arrive .\nFigure 10 : Achieved bandwidth over time using nondisjoint data transmission .\nWe also implemented a pbcast-like [ 2 ] approach for retrieving data missing from a data distribution tree .\nThe idea here is that nodes are expected to obtain most of their data from their parent .\nNodes then attempt to retrieve any missing data items through gossiping with random peers .\nInstead of using gossiping with a fixed number of rounds for each packet , we use anti-entropy with a FIFO Bloom filter to attempt to locate peers that hold any locally missing data items .\nTo make our evaluation conservative , we assume that nodes employing gossip and anti-entropy recovery are able to maintain full group membership .\nWhile this might be difficult in practice , we assume that RanSub [ 24 ] could also be applied to these ideas , specifically in the case of anti-entropy recovery that employs an underlying tree .\nFurther , we also allow both techniques to reuse other aspects of our implementation : Bloom filters , TFRC transport , etc. .\nTo reduce the number of duplicate packets , we use less peers in each round ( 5 ) than Bullet ( 10 ) .\nFor our configuration , we experimentally found that 5 peers results in the best performance with the lowest overhead .\nIn our experiments , increasing the number of peers did not improve the average bandwidth achieved throughout the system .\nTo allow TFRC enough time to ramp up to the appropriate TCP-friendly sending rate , we set the epoch length for anti-entropy recovery to 20 seconds .\nFor these experiments , we use a 5000-node INET topology with no explicit physical link losses .\nWe set link bandwidths according to the medium range from Table 1 , and randomly assign 100 overlay participants .\nThe randomly chosen root either streams at 900 Kbps ( over a random tree for Bullet and greedy bottleneck tree for anti-entropy recovery ) , or sends packets at that rate to randomly chosen nodes for gossiping .\nFigure 11 shows the resulting bandwidth over time achieved by Bullet and the two epidemic approaches .\nAs expected , Bullet comes close to providing the target bandwidth to all participants , achieving approximately 60 percent more then gossiping and streaming with anti-entropy .\nThe two epidemic techniques send an excessive number of duplicates , effectively reducing the useful bandwidth provided to each node .\nMore importantly , both approaches assign equal significance to other peers , regardless of the available band\nFigure 11 : Achieved bandwidth over time for Bullet and epidemic approaches .\nwidth and the similarity ratio .\nBullet , on the other hand , establishes long-term connections with peers that provide good bandwidth and disjoint content , and avoids most of the duplicates by requesting disjoint data from each node 's peers .\n4.5 Bullet on a Lossy Network\nTo evaluate Bullet 's performance under more lossy network conditions , we have modified our 20,000-node topologies used in our previous experiments to include random packet losses .\nModelNet allows the specification of a packet loss rate in the description of a network link .\nOur goal by modifying these loss rates is to simulate queuing behavior when the network is under load due to background network traffic .\nTo effect this behavior , we first modify all non-transit 
links in each topology to have a packet loss rate chosen uniformly at random from [ 0 , 0.003 ] , resulting in a maximum loss rate of 0.3 % .\nTransit links are likewise modified , but with a maximum loss rate of 0.1 % .\nSimilar to the approach in [ 28 ] , we randomly designated 5 % of the links in the topologies as overloaded and set their loss rates uniformly at random from [ 0.05 , 0.1 ] , resulting in a maximum packet loss rate of 10 % .\nFigure 12 shows achieved bandwidths for streaming over Bullet and using our greedy offline bottleneck bandwidth tree .\nBecause losses adversely affect the bandwidth achievable over TCP-friendly transport and since bandwidths are strictly monotonically decreasing over a streaming tree , tree-based algorithms perform considerably worse than Bullet when used on a lossy network .\nIn all cases , Bullet delivers at least twice as much bandwidth as the bottleneck bandwidth tree .\nAdditionally , losses in the low bandwidth topology essentially keep the bottleneck bandwidth tree from delivering any data , an artifact that is avoided by Bullet .\n4.6 Performance Under Failure\nIn this section , we discuss Bullet 's behavior in the face of node failure .\nIn contrast to streaming distribution trees that must quickly detect and make tree transformations to overcome failure , Bullet 's failure resilience rests on its ability to maintain a higher level of achieved bandwidth by virtue of perpendicular ( peer ) streaming .\nWhile all nodes under a failed node in a distribution tree will experience a temporary disruption in service , Bullet nodes are able to compensate for this by receiving data from peers throughout the outage .\nFigure 12 : Achieved bandwidths for Bullet and bottleneck bandwidth tree over a lossy network topology .\nBecause Bullet , and , more importantly , RanSub make use of an underlying tree overlay , part of Bullet 's failure recovery properties will depend on the failure recovery behavior of the underlying tree .\nFor the purposes of this discussion , we simply assume the worst-case scenario where an underlying tree has no failure recovery .\nIn our failure experiments , we fail one of the root 's children ( with 110 of the total 1000 nodes as descendants ) 250 seconds after data streaming is started .\nBy failing one of the root 's children , we are able to show Bullet 's worst-case performance under a single node failure .\nIn our first scenario , we disable failure detection in RanSub so that after a failure occurs , Bullet nodes request data only from their current peers .\nThat is , at this point , RanSub stops functioning and no new peer relationships are created for the remainder of the run .\nFigure 13 shows Bullet 's achieved bandwidth over time for this case .\nWhile the average achieved rate drops from 500 Kbps to 350 Kbps , most nodes ( including the descendants of the failed root child ) are able to recover a large portion of the data rate .\nNext , we enable RanSub failure detection that recognizes a node 's failure when a RanSub epoch has lasted longer than the predetermined maximum ( 5 seconds for this test ) .\nIn this case , the root simply initiates the next distribute phase upon RanSub timeout .\nThe net result is that nodes that are not descendants of the failed node will continue to receive updated random subsets allowing them to peer with appropriate nodes reflecting the new network conditions .\nAs shown in Figure 14 , the failure causes a negligible disruption in performance .\nWith RanSub failure detection enabled , nodes quickly 
learn of other nodes from which to receive data .\nOnce such recovery completes , the descendants of the failed node use their already established peer relationships to compensate for their ancestor 's failure .\nHence , because Bullet is an overlay mesh , its reliability characteristics far exceed that of typical overlay distribution trees .\n4.7 PlanetLab\nThis section contains results from the deployment of Bullet over the PlanetLab [ 31 ] wide-area network testbed .\nFor\nFigure 13 : Bandwidth over time with a worst-case node failure and no RanSub recovery .\nFigure 14 : Bandwidth over time with a worst-case node failure and RanSub recovery enabled .\nour first experiment , we chose 47 nodes for our deployment , with no two machines being deployed at the same site .\nSince there is currently ample bandwidth available throughout the PlanetLab overlay ( a characteristic not necessarily representative of the Internet at large ) , we designed this experiment to show that Bullet can achieve higher bandwidth than an overlay tree when the source is constrained , for instance in cases of congestion on its outbound access link , or of overload by a flash-crowd .\nWe did this by choosing a root in Europe connected to PlanetLab with fairly low bandwidth .\nThe node we selected was in Italy ( cs.unibo.it ) and we had 10 other overlay nodes in Europe .\nWithout global knowledge of the topology in PlanetLab ( and the Internet ) , we are , of course , unable to produce our greedy bottleneck bandwidth tree for comparison .\nWe ran Bullet over a random overlay tree for 300 seconds while attempting to stream at a rate of 1.5 Mbps .\nWe waited 50 seconds before starting to stream data to allow nodes to successfully join the tree .\nWe compare the performance of Bullet to data streaming over multiple handcrafted trees .\nFigure 15 shows our results for two such trees .\nThe `` good '' tree has all nodes in Europe located high in the tree , close to the root .\nWe used pathload [ 20 ] to measure the\nFigure 15 : Achieved bandwidth over time for Bullet and TFRC streaming over different trees on PlanetLab with a root in Europe .\navailable bandwidth between the root and all other nodes .\nNodes with high bandwidth measurements were placed close to the root .\nIn this case , we are able to achieve a bandwidth of approximately 300 Kbps .\nThe `` worst '' tree was created by setting the root 's children to be the three nodes with the worst bandwidth characteristics from the root as measured by pathload .\nAll subsequent levels in the tree were set in this fashion .\nFor comparison , we replaced all nodes in Europe from our topology with nodes in the US , creating a topology that only included US nodes with high bandwidth characteristics .\nAs expected , Bullet was able to achieve the full 1.5 Mbps rate in this case .\nA well constructed tree over this highbandwidth topology yielded slightly lower than 1.5 Mbps , verifying that our approach does not sacrifice performance under high bandwidth conditions and improves performance under constrained bandwidth scenarios .\n5 .\nRELATED WORK\nSnoeren et al. 
[ 36 ] use an overlay mesh to achieve reliable and timely delivery of mission-critical data .\nIn this system , every node chooses n `` parents '' from which to receive duplicate packet streams .\nSince its foremost emphasis is reliability , the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level .\nFurther , during recovery from parent failure , it limits an overlay router 's choice of parents to nodes with a level number that is less than its own level number .\nThe power of `` perpendicular '' downloads is perhaps best illustrated by Kazaa [ 22 ] , the popular peer-to-peer file swapping network .\nKazaa nodes are organized into a scalable , hierarchical structure .\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it .\nSince Kazaa does not address the multicast communication model , a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure .\nKazaa does not use erasure coding ; therefore it may take considerable time to locate `` the last few bytes . ''\nBitTorrent [ 3 ] is another example of a file distribution system currently deployed on the Internet .\nIt utilizes trackers that direct downloaders to random subsets of machines that already have portions of the file .\nThe tracker poses a scalability limit , as it continuously updates the systemwide distribution of the file .\nLowering the tracker communication rate could hurt the overall system performance , as information might be out of date .\nFurther , BitTorrent does not employ any strategy to disseminate data to different regions of the network , potentially making it more difficult to recover data depending on client access patterns .\nSimilar to Bullet , BitTorrent incorporates the notion of `` choking '' at each node with the goal of identifying receivers that benefit the most by downloading from that particular source .\nFastReplica [ 11 ] addresses the problem of reliable and efficient file distribution in content distribution networks ( CDNs ) .\nIn the basic algorithm , nodes are organized into groups of fixed size ( n ) , with full group membership information at each node .\nTo distribute the file , a node splits it into n equal-sized portions , sends the portions to other group members , and instructs them to download the missing pieces in parallel from other group members .\nSince only a fixed portion of the file is transmitted along each of the overlay links , the impact of congestion is smaller than in the case of tree distribution .\nHowever , since it treats all paths equally , FastReplica does not take full advantage of highbandwidth overlay links in the system .\nSince it requires file store-and-forward logic at each level of the hierarchy necessary for scaling the system , it may not be applicable to high-bandwidth streaming .\nThere are numerous protocols that aim to add reliability to IP multicast .\nIn Scalable Reliable Multicast ( SRM ) [ 16 ] , nodes multicast retransmission requests for missed packets .\nTwo techniques attempt to improve the scalability of this approach : probabilistic choice of retransmission timeouts , and organization of receivers into hierarchical local recovery groups .\nHowever , it is difficult to find appropriate timer values and local scoping settings ( via the TTL field ) for a wide range of topologies , number of 
receivers , etc. even when adaptive techniques are used .\nOne recent study [ 2 ] shows that SRM may have significant overhead due to retransmission requests .\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the nonreliable IP-multicast tree .\nIn pbcast [ 2 ] , a node has global group membership , and periodically chooses a random subset of peers to send a digest of its received packets .\nA node that receives the digest responds to the sender with the missing packets in a last-in , first-out fashion .\nLbpcast [ 14 ] addresses pbcast 's scalability issues ( associated with global knowledge ) by constructing , in a decentralized fashion , a partial group membership view at each node .\nThe average size of the views is engineered to allow a message to reach all participants with high probability .\nSince lbpcast does not require an underlying tree for data distribution and relies on the push-gossiping model , its network overhead can be quite high .\nCompared to the reliable multicast efforts , Bullet behaves favorably in terms of the network overhead because nodes do not `` blindly '' request retransmissions from their peers .\nInstead , Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content .\nFurther , a Bullet node splits the retransmission load between all of its peers .\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest .\nHowever , this does not guarantee that packets received in parallel from multiple peers will not be duplicates .\nMore importantly , the multicast recovery methods are limited by the bandwidth through the tree , while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree .\nNarada [ 19 ] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links .\nIt then runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source .\nNarada nodes maintain global knowledge about all group participants , limiting system scalability to several tens of nodes .\nFurther , the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent .\nOn the other hand , the fundamental goal of Bullet is to increase bandwidth through download of disjoint data from multiple peers .\nOvercast [ 21 ] is an example of a bandwidth-efficient overlay tree construction algorithm .\nIn this system , all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth .\nBullet is expected to be more resilient to node departures than any tree , including Overcast .\nInstead of a node waiting to get the data it missed from a new parent , a node can start getting data from its perpendicular peers .\nThis transition is seamless , as the node that is disconnected from its parent will start demanding more missing packets from its peers during the standard round of refreshing its filters .\nOvercast convergence time is limited by probes to immediate siblings and ancestors .\nBullet is able to provide approximately a target bandwidth without having a fully converged tree .\nIn parallel to our own work , SplitStream [ 9 ] also has the goal of achieving high bandwidth data dissemination .\nIt operates 
by splitting the multicast stream into k stripes , transmitting each stripe along a separate multicast tree built using Scribe [ 34 ] .\nThe key design goal of the tree construction mechanism is to have each node be an intermediate node in at most one tree ( while observing both inbound and outbound node bandwidth constraints ) , thereby reducing the impact of a single node 's sudden departure on the rest of the system .\nThe join procedure can potentially sacrifice the interior-node-disjointness achieved by Scribe .\nPerhaps more importantly , SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree , including the links between the data source and the roots of individual stripe trees independently chosen by Scribe .\nTo some extent , Bullet and SplitStream are complementary .\nFor instance , Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe .\nCoopNet [ 29 ] considers live content streaming in a peerto-peer environment , subject to high node churn .\nConsequently , the system favors resilience over network efficiency .\nIt uses a centralized approach for constructing either random or deterministic node-disjoint ( similar to SplitStream ) trees , and it includes an MDC [ 17 ] adaptation framework based on scalable receiver feedback that attempts to maximize the signal-to-noise ratio perceived by receivers .\nIn the case of on-demand streaming , CoopNet [ 30 ] addresses\nthe flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content .\nCompared to CoopNet , Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file .\n6 .\nCONCLUSIONS\nTypically , high bandwidth overlay data streaming takes place over a distribution tree .\nIn this paper , we argue that , in fact , an overlay mesh is able to deliver fundamentally higher bandwidth .\nOf course , a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers .\nThis paper presents the design and implementation of Bullet , a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures .\nSpecifically , this paper makes the following contributions : 9 We present the design and analysis of Bullet , an overlay construction algorithm that creates a mesh over any distribution tree and allows overlay participants to achieve a higher bandwidth throughput than traditional data streaming .\nAs a related benefit , we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques .\n9 We provide a technique for recovering missing data from peers in a scalable and efficient manner .\nRanSub periodically disseminates summaries of data sets received by a changing , uniformly random subset of global participants .\n9 We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes .\n9 A large-scale evaluation of 1000 overlay participants running in an emulated 20,000 node network topology , as well as experimentation on top of the PlanetLab Internet testbed , shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional 
bandwidth tree ."} {"id": "J-28", "title": "", "abstract": "", "keyphrases": ["approxim-effici and approximatelystrategyproof auction mechan", "singl-good multi-unit alloc problem", "fulli polynomi-time approxim scheme", "vickrei-clark-grove", "forward auction", "revers auction", "equilibrium", "margin-decreas piecewis constant curv", "bid languag", "dynam program", "approxim algorithm", "multi-unit auction", "strategyproof"], "prmu": [], "lvl-1": "Approximately-Strategyproof and Tractable Multi-Unit Auctions Anshul Kothari\u2217 David C. Parkes\u2020 Subhash Suri\u2217 ABSTRACT We present an approximately-efficient and approximatelystrategyproof auction mechanism for a single-good multi-unit allocation problem.\nThe bidding language in our auctions allows marginal-decreasing piecewise constant curves.\nFirst, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + )approximation in worst-case time T = O(n3 / ), given n bids each with a constant number of pieces.\nSecond, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n).\nThe maximal possible gain from manipulation to a bidder in the combined scheme is bounded by /(1+ )V , where V is the total surplus in the efficient outcome.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics.\nGeneral Terms Algorithms, Economics.\n1.\nINTRODUCTION In this paper we present a fully polynomial-time approximation scheme for the single-good multi-unit auction problem.\nOur scheme is both approximately efficient and approximately strategyproof.\nThe auction settings considered in our paper are motivated by recent trends in electronic commerce; for instance, corporations are increasingly using auctions for their strategic sourcing.\nWe consider both a reverse auction variation and a forward auction variation, and propose a compact and expressive bidding language that allows marginal-decreasing piecewise constant curves.\nIn the reverse auction, we consider a single buyer with a demand for M units of a good and n suppliers, each with a marginal-decreasing piecewise-constant cost function.\nIn addition, each supplier can also express an upper bound, or capacity constraint on the number of units she can supply.\nThe reverse variation models, for example, a procurement auction to obtain raw materials or other services (e.g. 
circuit boards, power supplies, toner cartridges), with flexible-sized lots.\nIn the forward auction, we consider a single seller with M units of a good and n buyers, each with a marginal-decreasing piecewise-constant valuation function.\nA buyer can also express a lower bound, or minimum lot size, on the number of units she demands.\nThe forward variation models, for example, an auction to sell excess inventory in flexible-sized lots.\nWe consider the computational complexity of implementing the Vickrey-Clarke-Groves [22, 5, 11] mechanism for the multi-unit auction problem.\nThe Vickrey-Clarke-Groves (VCG) mechanism has a number of interesting economic properties in this setting, including strategyproofness, such that truthful bidding is a dominant strategy for buyers in the forward auction and sellers in the reverse auction, and allocative efficiency, such that the outcome maximizes the total surplus in the system.\nHowever, as we discuss in Section 2, the application of the VCG-based approach is limited in the reverse direction to instances in which the total payments to the sellers are less than the value of the outcome to the buyer.\nOtherwise, either the auction must run at a loss in these instances, or the buyer cannot be expected to voluntarily choose to participate.\nThis is an example of the budget-deficit problem that often occurs in efficient mechanism design [17].\nThe computational problem is interesting, because even with marginal-decreasing bid curves, the underlying allocation problem turns out to be (weakly) intractable.\nFor instance, the classic 0/1 knapsack is a special case of this problem.1 We model the allocation problem as a novel and interesting generalization of the classic knapsack problem, and develop a fully polynomial-time approximation scheme, computing a (1 + ε)-approximation in worst-case time T = O(n3/ε), where each bid has a fixed number of piecewise constant pieces.\n1 However, the problem can be solved easily by a greedy scheme if we remove all capacity constraints from the seller and all minimum-lot size constraints from the buyers.\nGiven this scheme, a straightforward computation of the VCG payments to all n agents requires time O(nT).\nWe compute approximate VCG payments in worst-case time O(αT log(αn/ε)), where α is a constant that quantifies a reasonable no-monopoly assumption.\nSpecifically, in the reverse auction, suppose that C(I) is the minimal cost for procuring M units with all sellers I, and C(I \\ i) is the minimal cost without seller i. 
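To make these two quantities concrete (the no-monopoly constant defined immediately after this sketch is an upper bound on their ratio), here is a small, purely illustrative brute-force computation; it is not the paper's FPTAS, each seller is simplified to a single hypothetical unit price and capacity rather than a full piecewise-constant bid, and all numbers are invented.

```python
# Toy brute-force illustration of C(I) and C(I \ i) for a reverse auction.
# Hypothetical single-price sellers only; suitable only for tiny instances.
from itertools import product

def min_cost(M, sellers):
    """Minimum cost of procuring at least M units from `sellers`,
    where sellers maps a name to a (unit_price, capacity) pair."""
    prices = [p for (p, _) in sellers.values()]
    ranges = [range(cap + 1) for (_, cap) in sellers.values()]
    best = float("inf")
    for qs in product(*ranges):          # every per-seller quantity choice
        if sum(qs) >= M:
            best = min(best, sum(q * p for q, p in zip(qs, prices)))
    return best

sellers = {"A": (3, 60), "B": (4, 50), "C": (6, 100)}   # invented bids
M = 100
C_I = min_cost(M, sellers)                              # C(I)
C_without = {i: min_cost(M, {j: b for j, b in sellers.items() if j != i})
             for i in sellers}                          # each C(I) without seller i
print(C_I, C_without)
```

Rerunning the same brute force with one seller removed gives the per-seller costs on which the bound above depends.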
Then, the constant α is defined as an upper bound for the ratio C(I \\i)/C(I), over all sellers i.\nThis upper bound tends to 1 as the number of sellers increases.\nThe approximate VCG mechanism is ( ε/(1 + ε) )-strategyproof for an approximation to within (1 + ε) of the optimal allocation.\nThis means that a bidder can gain at most ( ε/(1 + ε) )V from a non-truthful bid, where V is the total surplus from the efficient allocation.\nAs such, this is an example of a computationally-tractable ε-dominance result.2\n2 However, this may not be an example of what Feigenbaum & Shenker refer to as a tolerably-manipulable mechanism [8] because we have not tried to bound the effect of such a manipulation on the efficiency of the outcome.\nVCG mechanisms do have a natural self-correcting property, though, because a useful manipulation to an agent is a reported value that improves the total value of the allocation based on the reports of other agents and the agent's own value.\nIn practice, we can have good confidence that bidders without good information about the bidding strategies of other participants will have little to gain from attempts at manipulation.\nSection 2 formally defines the forward and reverse auctions, and defines the VCG mechanisms.\nWe also prove our claims about ε-strategyproofness.\nSection 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial-time approximation scheme.\nSection 4 defines the approximation scheme for the payments in the VCG mechanism.\nSection 5 concludes.\n1.1 Related Work\nThere has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem, in which there are multiple different items.\nThe combinatorial allocation problem (CAP) is both NP-complete and inapproximable (e.g. [6]).\nAlthough some polynomial-time cases have been identified for the CAP [6, 20], introducing an expressive exclusive-or bidding language quickly breaks these special cases.\nWe identify a non-trivial but approximable allocation problem with an expressive exclusive-or bidding language: the bid taker in our setting is allowed to accept at most one point on the bid curve.\nThe idea of using approximations within mechanisms, while retaining either full strategyproofness or ε-dominance, has received some previous attention.\nFor instance, Lehmann et al. [15] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem.\nNisan & Ronen [18] discussed approximate VCG-based mechanisms, but either appealed to particular maximal-in-range approximations to retain full strategyproofness, or to resource-bounded agents with information or computational limitations on the ability to compute strategies.\nFeigenbaum & Shenker [8] have defined the concept of strategically faithful approximations, and proposed the study of approximations as an important direction for algorithmic mechanism design.\nSchummer [21] and Parkes et al. [19] have previously considered ε-dominance, in the context of economic impossibility results, for example in combinatorial exchanges.\nEso et al. [7] have studied a similar procurement problem, but for a different volume discount model.\nThis earlier work formulates the problem as a general mixed integer linear program, and gives some empirical results on simulated data.\nKalagnanam et al. 
[12] address double auctions, where multiple buyers and sellers trade a divisible good.\nThe focus of this paper is also different: it investigates the equilibrium prices using the demand and supply curves, whereas our focus is on efficient mechanism design.\nAusubel [1] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values [1], with an interpretation as a primal-dual algorithm [2].\n2.\nAPPROXIMATELY-STRATEGYPROOF VCG AUCTIONS In this section, we first describe the marginal-decreasing piecewise bidding language that is used in our forward and reverse auctions.\nContinuing, we introduce the VCG mechanism for the problem and the \u03b5-dominance results for approximations to VCG outcomes.\nWe also discuss the economic properties of VCG mechanisms in these forward and reverse auction multi-unit settings.\n2.1 Marginal-Decreasing Piecewise Bids We provide a piecewise-constant and marginal-decreasing bidding language.\nThis bidding language is expressive for a natural class of valuation and cost functions: fixed unit prices over intervals of quantities.\nSee Figure 1 for an example.\nIn addition, we slightly relax the marginal-decreasing requirement to allow: a bidder in the forward auction to state a minimal purchase amount, such that she has zero value for quantities smaller than that amount; a seller in the reverse auction to state a capacity constraint, such that she has an effectively infinite cost to supply quantities in excess of a particular amount.\nReverse Auction Bid 7 5 10 20 25 10 8 Quantity Price 7 5 10 20 25 10 8 Quantity Price Forward Auction Bid Figure 1: Marginal-decreasing, piecewise constant bids.\nIn the forward auction bid, the bidder offers $10 per unit for quantity in the range [5, 10), $8 per unit in the range [10, 20), and $7 in the range [20, 25].\nHer valuation is zero for quantities outside the range [10, 25].\nIn the reverse auction bid, the cost of the seller is \u221e outside the range [10, 25].\nIn detail, in a forward auction, a bid from buyer i can be written as a list of (quantity-range, unit-price) tuples, ((u1 i , p1 i ), (u2 i , p2 i ), ... , (umi\u22121 i , pmi\u22121 i )), with an upper bound umi i on the quantity.\nThe interpretation is that the bidder``s valuation in the 167 (semi-open) quantity range [uj i , uj+1 i ) is pj i for each unit.\nAdditionally, it is assumed that the valuation is 0 for quantities less than u1 i as well as for quantities more than um i .\nThis is implemented by adding two dummy bid tuples, with zero prices in the range [0, u1 i ) and (umi i , \u221e).\nWe interpret the bid list as defining a price function, pbid,i(q) = qpj i , if uj i \u2264 q < uj+1 i , where j = 1, 2, ... 
, mi \u22121.\nIn order to resolve the boundary condition, we assume that the bid price for the upper bound quantity umi i is pbid,i(umi i ) = umi i pmi\u22121 i .\nA seller``s bid is similarly defined in the reverse auction.\nThe interpretation is that the bidder``s cost in the (semi-open) quantity range [uj i , uj+1 i ) is pj i for each unit.\nAdditionally, it is assumed that the cost is \u221e for quantities less than u1 i as well as for quantities more than um i .\nEquivalently, the unit prices in the ranges [0, u1 i ) and (um i , \u221e) are infinity.\nWe interpret the bid list as defining a price function, pask,i(q) = qpj i , if uj i \u2264 q < uj+1 i .\n2.2 VCG-Based Multi-Unit Auctions We construct the tractable and approximately-strategyproof multiunit auctions around a VCG mechanism.\nWe assume that all agents have quasilinear utility functions; that is, ui(q, p) = vi(q)\u2212 p, for a buyer i with valuation vi(q) for q units at price p, and ui(q, p) = p \u2212 ci(q) for a seller i with cost ci(q) at price p.\nThis is a standard assumption in the auction literature, equivalent to assuming risk-neutral agents [13].\nWe will use the term payoff interchangeably for utility.\nIn the forward auction, there is a seller with M units to sell.\nWe assume that this seller has no intrinsic value for the items.\nGiven a set of bids from I agents, let V (I) denote the maximal revenue to the seller, given that at most one point on the bid curve can be selected from each agent and no more than M units of the item can be sold.\nLet x\u2217 = (x\u2217 1, ... , x\u2217 N ) denote the solution to this winner- determination problem, where x\u2217 i is the number of units sold to agent i. Similarly, let V (I \\ i) denote the maximal revenue to the seller without bids from agent i.\nThe VCG mechanism is defined as follows: 1.\nReceive piecewise-constant bid curves and capacity constraints from all the buyers.\n2.\nImplement the outcome x\u2217 that solves the winner-determination problem with all buyers.\n3.\nCollect payment pvcg,i = pbid,i(x\u2217 i ) \u2212 [V (I) \u2212 V (I \\ i)] from each buyer, and pass the payments to the seller.\nIn this forward auction, the VCG mechanism is strategyproof for buyers, which means that truthful bidding is a dominant strategy, i.e. 
utility maximizing whatever the bids of other buyers.\nIn addition, the VCG mechanism is allocatively-efficient, and the payments from each buyer are always positive.3 Moreover, each buyer pays less than its value, and receives payoff V (I)\u2212V (I \\ i) in equilibrium; this is precisely the marginal-value that buyer i contributes to the economic efficiency of the system.\nIn the reverse auction, there is a buyer with M units to buy, and n suppliers.\nWe assume that the buyer has value V > 0 to purchase all M units, but zero value otherwise.\nTo simplify the mechanism design problem we assume that the buyer will truthfully announce this value to the mechanism.4 The winner3 In fact, the VCG mechanism maximizes the expected payoff to the seller across all efficient mechanisms, even allowing for Bayesian-Nash implementations [14].\n4 Without this assumption, the Myerson-Satterthwaite [17] impossibility result would already imply that we should not expect an efficient trading mechanism in this setting.\ndetermination problem in the reverse auction is to determine the allocation, x\u2217 , that minimizes the cost to the buyer, or forfeits trade if the minimal cost is greater than value, V .\nLet C(I) denote the minimal cost given bids from all sellers, and let C(I \\i) denote the minimal cost without bids from seller i.\nWe can assume, without loss of generality, that there is an efficient trade and V \u2265 C(I).\nOtherwise, then the efficient outcome is no trade, and the outcome of the VCG mechanism is no trade and no payments.\nThe VCG mechanism implements the outcome x\u2217 that minimizes cost based on bids from all sellers, and then provides payment pvcg,i = pask,i(x\u2217 i )+[V \u2212C(I)\u2212max(0, V \u2212C(I\\i))] to each seller.\nThe total payment is collected from the buyer.\nAgain, in equilibrium each seller``s payoff is exactly the marginal-value that the seller contributes to the economic efficiency of the system; in the simple case that V \u2265 C(I \\ i) for all sellers i, this is precisely C(I \\ i) \u2212 C(I).\nAlthough the VCG mechanism remains strategyproof for sellers in the reverse direction, its applicability is limited to cases in which the total payments to the sellers are less than the buyer``s value.\nOtherwise, there will be instances in which the buyer will not choose to voluntarily participate in the mechanism, based on its own value and its beliefs about the costs of sellers.\nThis leads to a loss in efficiency when the buyer chooses not to participate, because efficient trades are missed.\nThis problem with the size of the payments, does not occur in simple single-item reverse auctions, or even in multi-unit reverse auctions with a buyer that has a constant marginal-valuation for each additional item that she procures.5 Intuitively, the problem occurs in the reverse multi-unit setting because the buyer demands a fixed number of items, and has zero value without them.\nThis leads to the possibility of the trade being contingent on the presence of particular, so-called pivotal sellers.\nDefine a seller i as pivotal, if C(I) \u2264 V but C(I\\i) > V .\nIn words, there would be no efficient trade without the seller.\nAny time there is a pivotal seller, the VCG payments to that seller allow her to extract all of the surplus, and the payments are too large to sustain with the buyer``s value unless this is the only winning seller.\nConcretely, we have this participation problem in the reverse auction when the total payoff to the sellers, in equilibrium, exceeds 
the total payoff from the efficient allocation: Σi [ V − C(I) − max(0, V − C(I \\ i)) ] > V − C(I) .\nAs stated above, first notice that we require V > C(I \\ i) for all sellers i.\nIn other words, there must be no pivotal sellers.\nGiven this, it is then necessary and sufficient that: V − C(I) ≥ Σi ( C(I \\ i) − C(I) ) (1)\n5 To make the reverse auction symmetric with the forward direction, we would need a buyer with a constant marginal-value to buy the first M units, and zero value for additional units.\nThe payments to the sellers would never exceed the buyer's value in this case.\nConversely, to make the forward auction symmetric with the reverse auction, we would need a seller with a constant (and high) marginal-cost to sell anything less than the first M units, and then a low (or zero) marginal cost.\nThe total payments received by the seller can be less than the seller's cost for the outcome in this case.\nIn words, the surplus of the efficient allocation must be greater than the total marginal-surplus provided by each seller.6 Consider an example with 3 agents {1, 2, 3}, and V = 150 and C(123) = 50.\nCondition (1) holds when C(12) = C(23) = 70 and C(13) = 100, but not when C(12) = C(23) = 80 and C(13) = 100.\nIn the first case, the agent payoffs π = (π0, π1, π2, π3), where 0 is the buyer, are (10, 20, 50, 20).\nIn the second case, the payoffs are π = (−10, 30, 50, 30).\nOne thing we do know, because the VCG mechanism will maximize the payoff to the buyer across all efficient mechanisms [14], is that whenever Eq. 1 is not satisfied there can be no efficient auction mechanism.7\n2.3 ε-Strategyproofness\nWe now consider the same VCG mechanism, but with an approximation scheme for the underlying allocation problem.\nWe derive an ε-strategyproofness result that bounds the maximal gain in payoff that an agent can expect to achieve through a unilateral deviation from following a simple truth-revealing strategy.\nWe describe the result for the forward auction direction, but it is quite a general observation.\nAs before, let V (I) denote the value of the optimal solution to the allocation problem with truthful bids from all agents, and V (I \\i) denote the value of the optimal solution computed without bids from agent i. 
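Before continuing with the approximation analysis, the reverse-auction payment rule and participation condition discussed above can be checked numerically. The sketch below is a hypothetical illustration, not code from the paper: with truthful bids, seller i's equilibrium payoff reduces to V − C(I) − max(0, V − C(I \ i)), and the buyer's payoff is V − C(I) minus the sellers' payoffs.

```python
# Hypothetical sketch: reverse-auction VCG payoffs and the participation
# condition (Eq. 1), using the two 3-seller cases discussed above.

def vcg_reverse_payoffs(V, C_all, C_without):
    """V: buyer value, C_all: C(I), C_without: maps each seller i to the
    minimal cost computed without that seller."""
    # With truthful bids, seller i's payoff is V - C(I) - max(0, V - C(I without i)).
    sellers = {i: V - C_all - max(0, V - c) for i, c in C_without.items()}
    buyer = V - C_all - sum(sellers.values())
    # Eq. (1): efficient surplus must cover the sum of marginal surpluses
    # (assuming no pivotal sellers, i.e. V > every cost without a seller).
    eq1_holds = (V - C_all) >= sum(c - C_all for c in C_without.values())
    return buyer, sellers, eq1_holds

# Case 1: C(12) = C(23) = 70, C(13) = 100  -> payoffs (10, 20, 50, 20), Eq. 1 holds.
print(vcg_reverse_payoffs(150, 50, {1: 70, 2: 100, 3: 70}))
# Case 2: C(12) = C(23) = 80, C(13) = 100  -> payoffs (-10, 30, 50, 30), Eq. 1 fails.
print(vcg_reverse_payoffs(150, 50, {1: 80, 2: 100, 3: 80}))
```

Running it reproduces the payoff vectors (10, 20, 50, 20) and (−10, 30, 50, 30) given above and flags the second case as violating Eq. (1).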
Let ˆV (I) and ˆV (I \\ i) denote the value of the allocation computed with an approximation scheme, and assume that the approximation satisfies: (1 + ε) ˆV (I) ≥ V (I) , for some ε > 0.\nWe provide such an approximation scheme for our setting later in the paper.\nLet ˆx denote the allocation implemented by the approximation scheme.\nThe payoff to agent i, for announcing valuation ˆvi, is: vi(ˆxi) + Σj≠i ˆvj(ˆxj) − ˆV (I \\ i) .\nThe final term is independent of the agent's announced value, and can be ignored in an incentive analysis.\nHowever, agent i can try to improve its payoff through the effect of its announced value on the allocation ˆx implemented by the mechanism.\nIn particular, agent i wants the mechanism to select ˆx to maximize the sum of its true value, vi(ˆxi), and the reported value of the other agents, Σj≠i ˆvj(ˆxj).\nIf the mechanism's allocation algorithm is optimal, then all the agent needs to do is truthfully state its value and the mechanism will do the rest.\nHowever, faced with an approximate allocation algorithm, the agent can try to improve its payoff by announcing a value that corrects for the approximation, and causes the approximation algorithm to implement the allocation that exactly maximizes the total reported value of the other agents together with its own actual value [18].\n6 This condition is implied by the agents are substitutes requirement [3], which has received some attention in the combinatorial auction literature because it characterizes the case in which VCG payments can be supported in a competitive equilibrium.\nUseful characterizations of conditions that satisfy agents are substitutes, in terms of the underlying valuations of agents, have proved quite elusive.\n7 Moreover, although there is a small literature on maximally-efficient mechanisms subject to requirements of voluntary participation and budget-balance (i.e. with the mechanism neither introducing nor removing money), analytic results are only known for simple problems (e.g. [16, 4]).\nWe can now analyze the best possible gain from manipulation to an agent in our setting.\nWe first assume that the other agents are truthful, and then relax this.\nIn both cases, the maximal benefit to agent i occurs when the initial approximation is worst-case.\nWith truthful reports from other agents, this occurs when the value of choice ˆx is V (I)/(1 + ε).\nThen, an agent could hope to receive an improved payoff of: V (I) − V (I)/(1 + ε) = ( ε/(1 + ε) ) V (I) .\nThis is possible if the agent is able to select a reported type to correct the approximation algorithm, and make the algorithm implement the allocation with value V (I).\nThus, if other agents are truthful, and with a (1 + ε)-approximation scheme to the allocation problem, then no agent can improve its payoff by more than a factor ε/(1 + ε) of the value of the optimal solution.\nThe analysis is very similar when the other agents are not truthful.\nIn this case, an individual agent can improve its payoff by no more than a factor ε/(1 + ε) of the value of the optimal solution given the values reported by the other agents.\nLet V in the following theorem define the total value of the efficient allocation, given the reported values of agents j ≠ i, and the true value of agent i. 
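The theorem below packages this bound. As a quick, purely illustrative numerical check of the identity used in the derivation (the values of ε and V here are made up):

```python
# Worst-case manipulation gain under a (1 + eps)-approximation:
# V(I) - V(I)/(1 + eps) equals (eps / (1 + eps)) * V(I).
for eps, V in [(0.05, 1000.0), (0.5, 1000.0)]:
    gain_direct = V - V / (1 + eps)
    gain_formula = (eps / (1 + eps)) * V
    assert abs(gain_direct - gain_formula) < 1e-9
    print(eps, round(gain_direct, 6))   # e.g. eps = 0.05 gives a gain of ~47.62
```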
THEOREM 1. A VCG-based mechanism with a (1 + ε)-approximate allocation algorithm is (ε/(1 + ε))·V strategyproof for agent i, and agent i can gain at most this amount through some non-truthful strategy.

Notice that we did not need to bound the error on the allocation problems without each agent, because the ε-strategyproofness result follows from the accuracy of the first term in the VCG payment and is independent of the accuracy of the second term. However, the accuracy of the solution to the problem without each agent is important for implementing a good approximation to the revenue properties of the VCG mechanism.

3. THE GENERALIZED KNAPSACK PROBLEM

In this section, we design a fully polynomial approximation scheme for the generalized knapsack, which models the winner-determination problem for the VCG-based multi-unit auctions. We describe our results for the reverse auction variation, but the formulation is completely symmetric for the forward auction. In describing our approximation scheme, we begin with a simple property (the Anchor property) of an optimal knapsack solution. We use this property to develop an O(n^2)-time 2-approximation for the generalized knapsack. In turn, we use this basic approximation to develop our fully polynomial-time approximation scheme (FPTAS). One of the major appeals of our piecewise bidding language is its compact representation of the bidders' valuation functions. We strive to preserve this, and present an approximation scheme that depends only on the number of bidders, and not on the maximum quantity M, which can be very large in realistic procurement settings. The FPTAS implements a (1 + ε)-approximation to the optimal solution x*, in worst-case time T = O(n^3/ε), where n is the number of bidders and we assume that the piecewise bid of each bidder has O(1) pieces. The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting nc for each occurrence of n.

3.1 Preliminaries

Before we begin, let us recall the classic 0/1 knapsack problem: we are given a set of n items, where item i has value vi and size si, and a knapsack of capacity M; all sizes are integers. The goal is to determine a subset of items of maximum value with total size at most M. Since we want to focus on a reverse auction, the equivalent knapsack problem is to choose a set of items of minimum value (i.e., cost) whose total size is at least M. The generalized knapsack problem of interest to us can be defined as follows:

Generalized Knapsack: Instance: A target M, and a set of n lists, where the ith list has the form Bi = (u1 i, p1 i), ...
, (umi\u22121 i , pmi\u22121 i ), (umi i (i), \u221e) , where uj i are increasing with j and pj i are decreasing with j, and uj i , pj i , M are positive integers.\nProblem: Determine a set of integers xj i such that 1.\n(One per list) At most one xj i is non-zero for any i, 2.\n(Membership) xj i = 0 implies xj i \u2208 [uj i , uj+1 i ), 3.\n(Target) \u00c8i \u00c8j xj i \u2265 M, and 4.\n(Objective) \u00c8i \u00c8j pj i xj i is minimized.\nThis generalized knapsack formulation is a clear generalization of the classic 0/1 knapsack.\nIn the latter, each list consists of a single point (si, vi).8 The connection between the generalized knapsack and our auction problem is transparent.\nEach list encodes a bid, representing multiple mutually exclusive quantity intervals, and one can choose any quantity in an interval, but at most one interval can be selected.\nChoosing interval [uj i , uj+1 i ) has cost pj i per unit.\nThe goal is to procure at least M units of the good at minimum possible cost.\nThe problem has some flavor of the continuous knapsack problem.\nHowever, there are two major differences that make our problem significantly more difficult: (1) intervals have boundaries, and so to choose interval [uj i , uj+1 i ) requires that at least uj i and at most uj+1 i units must be taken; (2) unlike the classic knapsack, we cannot sort the items (bids) by value/size, since different intervals in one list have different unit costs.\n3.2 A 2-Approximation Scheme We begin with a definition.\nGiven an instance of the generalized knapsack, we call each tuple tj i = (uj i , pj i ) an anchor.\nRecall that these tuples represent the breakpoints in the piecewise constant curve bids.\nWe say that the size of an anchor tj i is uj i , 8 In fact, because of the one per list constraint, the generalized problem is closer in spirit to the multiple choice knapsack problem [9], where the underling set of items is partitioned into disjoint subsets U1, U2, ... , Uk, and one can choose at most one item from each subset.\nPTAS do exist for this problem [10], and indeed, one can convert our problem into a huge instance of the multiple choice knapsack problem, by creating one group for each list; put a (quantity, price) point tuple (x, p) for each possible quantity for a bidder into his group (subset).\nHowever, this conversion explodes the problem size, making it infeasible for all but the most trivial instances.\nthe minimum number of units available at this anchor``s price pj i .\nThe cost of the anchor tj i is defined to be the minimum total price associated with this tuple, namely, cost(tj i ) = pj i uj i if j < mi, and cost(tmi i ) = pmi\u22121 i umi i .\nIn a feasible solution {x1, x2, ... 
, xn} of the generalized knapsack, we say that an element xi ≠ 0 is an anchor if xi = u^j_i for some anchor u^j_i. Otherwise, we say that xi is midrange. We observe that an optimal knapsack solution can always be constructed so that at most one solution element is midrange: if there are two midrange elements x and x', for bids from two different agents, with x ≤ x', then we can increment x' and decrement x until one of them becomes an anchor. See Figure 2 for an example.

LEMMA 1. [Anchor Property] There exists an optimal solution of the generalized knapsack problem with at most one midrange element. All other elements are anchors.

Figure 2: (i) An optimal solution with more than one bid not anchored (bids 2 and 3); (ii) an optimal solution with only one bid (bid 3) not anchored. (The panels plot price against quantity for three piecewise-constant bids.)

We use the anchor property to first obtain a polynomial-time 2-approximation scheme. We do this by solving several instances of a restricted generalized-knapsack problem, which we call iKnapsack, in which one element is forced to be midrange for a particular interval. Specifically, suppose element xℓ for agent ℓ is forced to lie in its jth range, [u^j_ℓ, u^{j+1}_ℓ), while all other elements, x1, ..., xℓ−1, xℓ+1, ..., xn, are required to be anchors or zero. This corresponds to the restricted problem iKnapsack(ℓ, j), in which the goal is to obtain at least M − u^j_ℓ units at minimum cost. Element xℓ is assumed to have already contributed u^j_ℓ units, so the value of a solution to iKnapsack(ℓ, j) represents the minimal additional cost to purchase the rest of the units.

We create n − 1 groups of potential anchors, where the ith group contains all the anchors of list i in the generalized knapsack. The group for agent ℓ contains a single element that represents the interval [0, u^{j+1}_ℓ − u^j_ℓ) with associated unit price p^j_ℓ. This interval represents the excess number of units that can be taken from agent ℓ in iKnapsack(ℓ, j), in addition to the u^j_ℓ units that have already been committed. In any other group, we can choose at most one anchor. The following pseudo-code describes our algorithm for this restriction of the generalized knapsack problem. U is the union of all the tuples in the n groups, including the special tuple tℓ for agent ℓ, whose size is defined as u^{j+1}_ℓ − u^j_ℓ and whose cost is defined as p^j_ℓ (u^{j+1}_ℓ − u^j_ℓ). R is the number of units that remain to be acquired. S is the set of tuples accepted in the current tentative solution. Best is the best solution found so far. Variable Skip is only used in the proof of correctness.

Algorithm Greedy(ℓ, j)
1. Sort all tuples of U in ascending order of unit price; break ties in ascending order of unit quantity.
2. Set mark(i) = 0 for all lists i = 1, 2, ..., n. Initialize R = M − u^j_ℓ, S = Best = Skip = ∅.
3. Scan the tuples in U in the sorted order. Suppose the next tuple is t^k_i, i.e.
the kth anchor from agent i.\nIf mark(i) = 1, ignore this tuple; otherwise do the following steps: \u2022 if size(tk i ) > R and i = return min {cost(S) + Rpj , cost(Best)}; \u2022 if size(tk i ) > R and cost(tk i ) \u2264 cost(S) return min {cost(S) + cost(tk i ), cost(Best)}; \u2022 if size(tk i ) > R and cost(tk i ) > cost(S) Add tk i to Skip; Set Best to S \u222a {tk i } if cost improves; \u2022 if size(tk i ) \u2264 R then add tk i to S; mark(i) = 1; subtract size(tk i ) from R.\nThe approximation algorithm is very similar to the approximation algorithm for knapsack.\nSince we wish to minimize the total cost, we consider the tuples in order of increasing per unit cost.\nIf the size of tuple tk i is smaller than R, then we add it to S, update R, and delete from U all the tuples that belong to the same group as tk i .\nIf size(tk i ) is greater than R, then S along with tk i forms a feasible solution.\nHowever, this solution can be far from optimal if the size of tk i is much larger than R.\nIf total cost of S and tk i is smaller than the current best solution, we update Best.\nOne exception to this rule is the tuple t .\nSince this tuple can be taken fractionally, we update Best if the sum of S``s cost and fractional cost of t is an improvement.\nThe algorithm terminates in either of the first two cases, or when all tuples are scanned.\nIn particular, it terminates whenever we find a tk i such that size(tk i ) is greater than R but cost(tk i ) is less than cost(S), or when we reach the tuple representing agent l and it gives a feasible solution.\nLEMMA 2.\nSuppose A\u2217 is an optimal solution of the generalized knapsack, and suppose that element (l, j) is midrange in the optimal solution.\nThen, the cost V (l, j), returned by Greedy( , j), satisfies: V ( , j) + cost(tj ) \u2264 2cost(A\u2217 ) PROOF.\nLet V ( , j) be the value returned by Greedy( , j) and let V \u2217 ( , j) be an optimal solution for iKnapsack( , j).\nConsider the set Skip at the termination of Greedy( , j).\nThere are two cases to consider: either some tuple t \u2208 Skip is also in V \u2217 ( , j), or no tuple in Skip is in V \u2217 ( , j).\nIn the first case, let St be the tentative solution S at the time t was added to Skip.\nBecause t \u2208 Skip then size(t) > R, and St together with t forms a feasible solution, and we have: V ( , j) \u2264 cost(Best) \u2264 cost(St) + cost(t).\nAgain, because t \u2208 Skip then cost(t) > cost(St), and we have V ( , j) < 2cost(t).\nOn the other hand, since t is included in V \u2217 ( , j), we have V \u2217 ( , j) \u2265 cost(t).\nThese two inequalities imply the desired bound: V \u2217 ( , j) \u2264 V ( , j) < 2V \u2217 ( , j).\nIn the second case, imagine a modified instance of iKnapsack( , j), which excludes all the tuples of the set Skip.\nSince none of these tuples were included in V \u2217 ( , j), the optimal solution for the modified problem should be the same as the one for the original.\nSuppose our approximation algorithm returns the value V ( , j) for this modified instance.\nLet t be the last tuple considered by the approximation algorithm before termination on the modified instance, and let St be the corresponding tentative solution set in that step.\nSince we consider tuples in order of increasing per unit price, and none of the tuples are going to be placed in the set Skip, we must have cost(St ) < V \u2217 ( , j) because St is the optimal way to obtain size(St ).\nWe also have cost(t ) \u2264 cost(St ), and the following inequalities: V ( , j) \u2264 V ( , j) 
\u2264 cost(St ) + cost(t ) < 2V \u2217 ( , j) The inequality V ( , j) \u2264 V ( , j) follows from the fact that a tuple in the Skip list can only affect the Best but not the tentative solutions.\nTherefore, dropping the tuples in the set Skip can only make the solution worse.\nThe above argument has shown that the value returned by Greedy( , j) is within a factor 2 of the optimal solution for iKnapsack( , j).\nWe now show that the value V ( , j) plus cost(tj ) is a 2-approximation of the original generalized knapsack problem.\nLet A\u2217 be an optimal solution of the generalized knapsack, and suppose that element xj is midrange.\nLet x\u2212 to be set of the remaining elements, either zero or anchors, in this solution.\nFurthermore, define x = xj \u2212 uj .\nThus, cost(A\u2217 ) = cost(xl) + cost(tj l ) + cost(x\u2212l) It is easy to see that (x\u2212 , x ) is an optimal solution for iKnapsack( , j).\nSince V ( , j) is a 2-approximation for this optimal solution, we have the following inequalities: V ( , j) + cost(tj ) \u2264 cost(tj ) + 2(cost(x ) + cost(x\u2212 )) \u2264 2(cost(x ) + cost(tj ) + cost(x\u2212 )) \u2264 2cost(A\u2217 ) This completes the proof of Lemma 2.\nIt is easy to see that, after an initial sorting of the tuples in U, the algorithm Greedy( , j) takes O(n) time.\nWe have our first polynomial approximation algorithm.\nTHEOREM 2.\nA 2-approximation of the generalized knapsack problem can be found in time O(n2 ), where n is number of item lists (each of constant length).\nPROOF.\nWe run the algorithm Greedy( , j) once for each tuple (l, j) as a candidate for midrange.\nThere are O(n) tuples, and it suffices to sort them once, the total cost of the algorithm is O(n2 ).\nBy Lemma 1, there is an optimal solution with at most one midrange element, so our algorithm will find a 2-approximation, as claimed.\nThe dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time is O((nc)2 ).\n171 3.3 An Approximation Scheme We now use the 2-approximation algorithm presented in the preceding section to develop a fully polynomial approximation (FPTAS) for the generalized knapsack problem.\nThe high level idea is fairly standard, but the details require technical care.\nWe use a dynamic programming algorithm to solve iKnapsack( , j) for each possible midrange element, with the 2-approximation algorithm providing an upper bound on the value of the solution and enabling the use of scaling on the cost dimension of the dynamic programming (DP) table.\nConsider, for example, the case that the midrange element is x , which falls in the range [uj , uj+1 ).\nIn our FPTAS, rather than using a greedy approximation algorithm to solve iKnapsack( , j), we construct a dynamic programming table to compute the minimum cost at which at least M \u2212 uj+1 units can be obtained using the remaining n \u2212 1 lists in the generalized knapsack.\nSuppose G[i, r] denotes the maximum number of units that can be obtained at cost at most r using only the first i lists in the generalized knapsack.\nThen, the following recurrence relation describes how to construct the dynamic programming table: G[0, r] = 0 G[i, r] = max \u00b4 G[i \u2212 1, r] max j\u2208\u03b2(i,r) {G[i \u2212 1, r \u2212 cost(tj i )] + uj i } \u00b5 where \u03b2(i, r) = {j : 1 \u2264 j \u2264 mi, cost(tj i ) \u2264 r}, is the set of anchors for agent i.\nAs convention, agent i will index the row, and cost r will index the column.\nThis dynamic programming algorithm is only 
pseudo-polynomial, since the number of column in the dynamic programming table depends upon the total cost.\nHowever, we can convert it into a FPTAS by scaling the cost dimension.\nLet A denote the 2-approximation to the generalized knapsack problem, with total cost, cost(A).\nLet \u03b5 denote the desired approximation factor.\nWe compute the scaled cost of a tuple tj i , denoted scost(tj i ), as scost(tj i ) = n cost(tj i ) \u03b5cost(A) (2) This scaling improves the running time of the algorithm because the number of columns in the modified table is at most n \u03b5 , and independent of the total cost.\nHowever, the computed solution might not be an optimal solution for the original problem.\nWe show that the error introduced is within a factor of \u03b5 of the optimal solution.\nAs a prelude to our approximation guarantee, we first show that if two different solutions to the iKnapsack problem have equal scaled cost, then their original (unscaled) costs cannot differ by more than \u03b5cost(A).\nLEMMA 3.\nLet x and y be two distinct feasible solutions of iKnapsack( , j), excluding their midrange elements.\nIf x and y have equal scaled costs, then their unscaled costs cannot differ by more than \u03b5cost(A).\nPROOF.\nLet Ix and Iy, respectively, denote the indicator functions associated with the anchor vectors x and y-there is 1 in position Ix[i, k] if the xk i > 0.\nSince x and y has equal scaled cost, i= k scost(tk i )Ix[i, k] = i= k scost(tk i )Iy[i, k] (3) However, by (2), the scaled costs satisfy the following inequalities: (scost(tk i ) \u2212 1)\u03b5cost(A) n \u2264 cost(tk i ) \u2264 scost(tk i )\u03b5cost(A) n (4) Substituting the upper-bound on scaled cost from (4) for cost(x), the lower-bound on scaled cost from (4) for cost(y), and using equality (3) to simplify, we have: cost(x) \u2212 cost(y) \u2264 \u03b5cost(A) n i= k Iy[i, k] \u2264 \u03b5cost(A), The last inequality uses the fact that at most n components of an indicator vector are non-zero; that is, any feasible solution contains at most n tuples.\nFinally, given the dynamic programming table for iKnapsack( , j), we consider all the entries in the last row of this table, G[n\u22121, r].\nThese entries correspond to optimal solutions with all agents except l, for different levels of cost.\nIn particular, we consider the entries that provide at least M \u2212 uj+1 units.\nTogether with a contribution from agent l, we choose the entry in this set that minimizes the total cost, defined as follows: cost(G[n \u2212 1, r]) + max {uj , M \u2212 G[n \u2212 1, r]}pj , where cost() is the original, unscaled cost associated with entry G[n\u22121, r].\nIt is worth noting, that unlike the 2-approximation scheme for iKnapsack( , j), the value computed with this FPTAS includes the cost to acquire uj l units from l.\nThe following lemma shows that we achieve a (1+\u03b5)-approximation.\nLEMMA 4.\nSuppose A\u2217 is an optimal solution of the generalized knapsack problem, and suppose that element (l, j) is midrange in the optimal solution.\nThen, the solution A(l, j) from running the scaled dynamic-programming algorithm on iKnapsack( , j) satisfies cost(A(l, j)) \u2264 (1 + 2\u03b5)cost(A\u2217 ) PROOF.\nLet x\u2212 denote the vector of the elements in solution A\u2217 without element l. 
Then, by definition, cost(A\u2217 ) = cost(x\u2212 ) + pj xj .\nLet r = scost(x\u2212 ) be the scaled cost associated with the vector x\u2212 .\nNow consider the dynamic programming table constructed for iKnapsack( , j), and consider its entry G[n \u2212 1, r].\nLet A denote the 2-approximation to the generalized knapsack problem, and A(l, j) denote the solution from the dynamic-programming algorithm.\nSuppose y\u2212 is the solution associated with this entry in our dynamic program; the components of the vector y\u2212 are the quantities from different lists.\nSince both x\u2212 and y\u2212 have equal scaled costs, by Lemma 3, their unscaled costs are within \u03b5cost(A) of each other; that is, cost(y\u2212 ) \u2212 cost(x\u2212 ) \u2264 \u03b5cost(A).\nNow, define yj = max{uj , M \u2212 \u00c8i= \u00c8j yj i }; this is the contribution needed from to make (y\u2212 , yj ) a feasible solution.\nAmong all the equal cost solutions, our dynamic programming tables chooses the one with maximum units.\nTherefore, i= j yj i \u2265 i= j xj i 172 Therefore, it must be the case that yj \u2264 xj .\nBecause (yj , y\u2212 ) is also a feasible solution, if our algorithm returns a solution with cost cost(A(l, j)), then we must have cost(A(l, j)) \u2264 cost(y\u2212 ) + pj yj \u2264 cost(x\u2212 ) + \u03b5cost(A) + pj xj \u2264 (1 + 2\u03b5)cost(A\u2217 ), where we use the fact that cost(A) \u2264 2cost(A\u2217 ).\nPutting this together, our approximation scheme for the generalized knapsack problem will iterate the scheme described above for each choice of the midrange element (l, j), and choose the best solution from among these O(n) solutions.\nFor a given midrange, the most expensive step in the algorithm is the construction of dynamic programming table, which can be done in O(n2 /\u03b5) time assuming constant intervals per list.\nThus, we have the following result.\nTHEOREM 3.\nWe can compute an (1 + \u03b5) approximation to the solution of a generalized knapsack problem in worst-case time O(n3 /\u03b5).\nThe dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting cn for each occurrence of n. 
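To make the table construction concrete, the following minimal sketch (Python) implements the scaled dynamic program for a single restricted problem iKnapsack(ℓ, j). All identifiers are ours; the ceiling used in the cost-scaling step is one reasonable rounding choice rather than the paper's exact specification, and the boundary handling at the top of agent ℓ's interval is simplified.

import math

def solve_iknapsack(anchors, ell_piece, M, eps, cost_A):
    # Approximate the restricted problem iKnapsack(l, j) with a scaled-cost DP.
    # anchors: for each agent i != l, a list of (units, cost) anchor tuples,
    #          where cost is the total price of taking that anchor in full.
    # ell_piece: (u_j, u_j1, p_j), agent l's forced interval [u_j, u_j1) and unit price.
    # cost_A: value of the 2-approximation, used as the scaling reference.
    # Returns an approximate minimum total cost, including agent l's contribution.
    u_j, u_j1, p_j = ell_piece
    n = len(anchors) + 1
    scale = eps * cost_A / n                    # size of one scaled-cost unit
    max_r = int(math.ceil(cost_A / scale)) + n  # enough columns for any solution under cost_A

    # dp[r] = (units, true_cost): maximum units obtainable from the other agents at
    # scaled cost exactly r (at most one anchor per agent), plus the unscaled cost
    # of that solution; (-1, 0.0) marks an unreachable column.
    dp = [(-1, 0.0)] * (max_r + 1)
    dp[0] = (0, 0.0)
    for agent_anchors in anchors:
        new_dp = dp[:]                          # option: take nothing from this agent
        for units, cost in agent_anchors:
            r_cost = int(math.ceil(cost / scale))
            for r in range(0, max_r - r_cost + 1):
                base_units, base_cost = dp[r]
                if base_units < 0:
                    continue
                cand = (base_units + units, base_cost + cost)
                cur = new_dp[r + r_cost]
                if cand[0] > cur[0] or (cand[0] == cur[0] and cand[1] < cur[1]):
                    new_dp[r + r_cost] = cand
        dp = new_dp

    best = float("inf")
    for units, true_cost in dp:
        if units < 0 or units < M - u_j1:       # need at least M - u_{j+1} units from the others
            continue
        from_ell = max(u_j, M - units)          # agent l covers the remainder, at least u_j
        best = min(best, true_cost + from_ell * p_j)
    return best

The full scheme would call a routine like this once for each candidate midrange pair (ℓ, j) and keep the cheapest result, which is where the O(n^3/ε) bound of Theorem 3 comes from.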
4. COMPUTING VCG PAYMENTS

We now consider the related problem of computing the VCG payments for all the agents. A naive approach requires solving the allocation problem n times, removing each agent in turn. In this section, we show that our approximation scheme for the generalized knapsack can be extended to determine all n payments in total time O(αT log(αn/ε)), where 1 ≤ C(I \ i)/C(I) ≤ α for a constant upper bound α, and T is the complexity of solving the allocation problem once. This α-bound can be justified as a no-monopoly condition, because it bounds the marginal value that a single buyer brings to the auction. Similarly, in the reverse variation we can compute the VCG payments to each seller in time O(αT log(αn/ε)), where α bounds the ratio C(I \ i)/C(I) for all i.

Our overall strategy is to build two dynamic programming tables, forward and backward, for each midrange element (ℓ, j) once. The forward table is built by considering the agents in the order of their indices, whereas the backward table is built by considering them in the reverse order. The optimal solution corresponding to C(I \ i) can be broken into two parts: one corresponding to the first (i − 1) agents and the other corresponding to the last (n − i) agents. As the (i − 1)th row of the forward table corresponds to the sellers with the first (i − 1) indices, an approximation to the first part is contained in the (i − 1)th row of the forward table. Similarly, the (n − i)th row of the backward table contains an approximation for the second part.

We first present a simple but inefficient way of computing the approximate value of C(I \ i), which illustrates the main idea of our algorithm. Then we present an improved scheme, which uses the fact that the elements in the rows are sorted to compute the approximate value more efficiently. In the following, we concentrate on computing an allocation with x^j_ℓ being midrange and some agent i ≠ ℓ removed. This is a component in computing an approximation to C(I \ i), the value of the solution to the generalized knapsack without bids from agent i. We begin with the simple scheme.

4.1 A Simple Approximation Scheme

We implement the scaled dynamic programming algorithm for iKnapsack(ℓ, j) with two alternate orderings over the other sellers k ≠ ℓ: one with sellers ordered 1, 2, ..., n, and one with sellers ordered n, n − 1, ..., 1. We call the first table the forward table, denoted Fℓ, and the second the backward table, denoted Bℓ. The subscript reminds us that agent ℓ is midrange.9 In building these tables, we use the same scaling factor as before; namely, the cost of a tuple t^j_i is scaled as scost(t^j_i) = n·cost(t^j_i) / (ε·cost(A)), where cost(A) is the upper bound on C(I) given by our 2-approximation scheme. In this case, because C(I \ i) can be α times C(I), the scaled value of C(I \ i) can be at most nα/ε. Therefore, the cost dimension of our dynamic program's table will be nα/ε.

Figure 3: Computing VCG payments. The forward table Fℓ (with row Fℓ(i − 1) and entry g) and the backward table Bℓ (with row Bℓ(n − i) and entry h); each table has m = nα/ε cost columns.

Now, suppose we want to compute a (1 + ε)-approximation to the generalized knapsack problem restricted to element (ℓ, j) midrange, and further restricted to remove bids from some seller i ≠ ℓ.
Call this problem iKnapsack\u2212i ( , j).\nRecall that the ith row of our DP table stores the best solution possible using only the first i agents excluding agent l, all of them either cleared at zero, or on anchors.\nThese first i agents are a different subset of agents in the forward and the backward tables.\nBy carefully combining one row of Fl with one row of Bl we can compute an approximation to iKnapsack\u2212i ( , j).\nWe consider the row of Fl that corresponds to solutions constructed from agents {1, 2, ... , i \u2212 1}, skipping agent l.\nWe consider the row of Bl that corresponds to solutions constructed from agents {i+1, i+2, ... , n}, again skipping agent l.\nThe rows are labeled Fl(i \u2212 1) and Bl(n \u2212 i) respectively.10 The scaled costs for acquiring these units are the column indices for these entries.\nTo solve iKnapsack\u2212i ( , j) we choose one entry from row F (i\u22121) and one from row B (n\u2212i) such that their total quantity exceeds M \u2212 uj+1 and their combined cost is minimum over all such combinations.\nFormally, let g \u2208 Fl(i \u2212 1), and h \u2208 Bl(n \u2212 1) denote entries in each row, with size(g), size(h), denoting the number of units and cost(g) and cost(h) denoting the unscaled cost associated with the entry.\nWe compute the following, subject 9 We could label the tables with both and j, to indicate the jth tuple is forced to be midrange, but omit j to avoid clutter.\n10 To be precise, the index of the rows are (i \u2212 2) and (n \u2212 i) for Fl and Bl when l < i, and (i \u2212 1) and (n \u2212 i \u2212 1), respectively, when l > i. 173 to the condition that g and h satisfy size(g) + size(h) > M \u2212 uj+1 : min g\u2208F (i\u22121),h\u2208B (n\u2212i) \u00d2cost(g) + cost(h) + pj \u00b7 max{uj , M \u2212 size(g) \u2212 size(h)} \u00d3 (5) LEMMA 5.\nSuppose A\u2212i is an optimal solution of the generalized knapsack problem without bids from agent i, and suppose that element (l, j) is the midrange element in the optimal solution.\nThen, the expression in Eq.\n5, for the restricted problem iKnapsack\u2212i ( , j), computes a (1 + \u03b5)-approximation to A\u2212i .\nPROOF.\nFrom earlier, we define cost(A\u2212i ) = C(I \\ i).\nWe can split the optimal solution, A\u2212i , into three disjoint parts: xl corresponds to the midrange seller, xi corresponds to first i \u2212 1 sellers (skipping agent l if l < i), and x\u2212i corresponds to last n \u2212 i sellers (skipping agent l if l > i).\nWe have: cost(A\u2212i ) = cost(xi) + cost(x\u2212i) + pj xj Let ri = scost(xi) and r\u2212i = scost(x\u2212i).\nLet yi and y\u2212i be the solution vectors corresponding to scaled cost ri and r\u2212i in F (i \u2212 1) and B (n \u2212 i), respectively.\nFrom Lemma 3 we conclude that, cost(yi) + cost(y\u2212i) \u2212 cost(xi) \u2212 cost(x\u2212i) \u2264 \u03b5cost(A) where cost(A) is the upper-bound on C(I) computed with the 2-approximation.\nAmong all equal scaled cost solutions, our dynamic program chooses the one with maximum units.\nTherefore we also have, (size(yi) \u2265 size(xi)) and (size(y\u2212i) \u2265 size(x\u2212i)) where we use shorthand size(x) to denote total number of units in all tuples in x. 
Now, define y^j_ℓ = max(u^j_ℓ, M − size(yi) − size(y−i)). From the preceding inequalities, we have y^j_ℓ ≤ x^j_ℓ. Since (y^j_ℓ, yi, y−i) is also a feasible solution to the generalized knapsack problem without agent i, the value returned by Eq. (5) is at most

cost(yi) + cost(y−i) + p^j_ℓ y^j_ℓ ≤ C(I \ i) + ε·cost(A) ≤ C(I \ i) + 2ε·cost(A*) ≤ C(I \ i) + 2ε·C(I \ i)

This completes the proof.

A naive implementation of this scheme would be inefficient because it might check (nα/ε)^2 pairs of elements for any particular choice of (ℓ, j) and choice of dropped agent i. In the next section, we present an efficient way to compute Eq. (5), and eventually to compute the VCG payments.

4.2 Improved Approximation Scheme

Our improved approximation scheme for the winner-determination problem without agent i uses the fact that the elements in Fℓ(i − 1) and Bℓ(n − i) are sorted; specifically, both unscaled cost and quantity (i.e., size) increase from left to right. As before, let g and h denote generic entries in Fℓ(i − 1) and Bℓ(n − i), respectively. To compute Eq. (5), we consider all the pairs satisfying size(g) + size(h) > M − u^{j+1}_ℓ and divide them into two disjoint sets. For each set we compute the best solution, and then take the better of the two.

[Case I: size(g) + size(h) ≥ M − u^j_ℓ] The problem reduces to

min over g ∈ Fℓ(i−1), h ∈ Bℓ(n−i) of { cost(g) + cost(h) + p^j_ℓ u^j_ℓ }     (6)

We define a pair (g, h) to be feasible if size(g) + size(h) ≥ M − u^j_ℓ. To compute Eq. (6), we do a forward and a backward walk on Fℓ(i − 1) and Bℓ(n − i), respectively: we start from the smallest index of Fℓ(i − 1) and move right, and from the highest index of Bℓ(n − i) and move left. Let (g, h) be the current pair. If (g, h) is feasible, we decrement B's pointer (that is, move backward); otherwise we increment F's pointer. The feasible pairs found during the walk are used to compute Eq. (6). The complexity of this step is linear in the size of Fℓ(i − 1), which is O(nα/ε).

[Case II: M − u^{j+1}_ℓ ≤ size(g) + size(h) ≤ M − u^j_ℓ] The problem reduces to

min over g ∈ Fℓ(i−1), h ∈ Bℓ(n−i) of { cost(g) + cost(h) + p^j_ℓ (M − size(g) − size(h)) }

To compute this, we transform the problem using a modified cost, defined as mcost(g) = cost(g) − p^j_ℓ·size(g) and mcost(h) = cost(h) − p^j_ℓ·size(h). The new problem is to compute

min over g ∈ Fℓ(i−1), h ∈ Bℓ(n−i) of { mcost(g) + mcost(h) + p^j_ℓ M }     (7)

The modified cost simplifies the problem, but unfortunately the elements in Fℓ(i − 1) and Bℓ(n − i) are no longer sorted with respect to mcost. However, the elements are still sorted by quantity, and we use this property to compute Eq. (7). Call a pair (g, h) feasible if M − u^{j+1}_ℓ ≤ size(g) + size(h) ≤ M − u^j_ℓ. Define the feasible set of g as the set of elements h ∈ Bℓ(n − i) that are feasible given g. As the elements are sorted by quantity, the feasible set of g is a contiguous subset of Bℓ(n − i) and shifts left as g increases.

Figure 4: The feasible set of g = 3, defined on Bℓ(n − i), is {2, 3, 4} when M − u^{j+1}_ℓ = 50 and M − u^j_ℓ = 60. Begin and End
represent the start and end pointers to the feasible set.\nTherefore, we can compute Eq.\n7 by doing a forward and backward walk on F (i \u2212 1) and B (n \u2212 i) respectively.\nWe walk on B (n \u2212 i), starting from the highest index, using two pointers, Begin and End, to indicate the start and end of the current feasible set.\nWe maintain the feasible set as a min heap, where the key is modified cost.\nTo update the feasible set, when we increment F``s pointer(move forward), we walk left on B, first using End to remove elements from feasible set which are no longer 174 feasible and then using Begin to add new feasible elements.\nFor a given g, the only element which we need to consider in g``s feasible set is the one with minimum modified cost which can be computed in constant time with the min heap.\nSo, the main complexity of the computation lies in heap updates.\nSince, any element is added or deleted at most once, there are O(n\u03b1 \u03b5 ) heap updates and the time complexity of this step is O(n\u03b1 \u03b5 log n\u03b1 \u03b5 ).\n4.3 Collecting the Pieces The algorithm works as follows.\nFirst, using the 2 approximation algorithm, we compute an upper bound on C(I).\nWe use this bound to scale down the tuple costs.\nUsing the scaled costs, we build the forward and backward tables corresponding to each tuple (l, j).\nThe forward tables are used to compute C(I).\nTo compute C(I \\ i), we iterate over all the possible midrange tuples and use the corresponding forward and backward tables to compute the locally optimal solution using the above scheme.\nAmong all the locally optimal solutions we choose one with the minimum total cost.\nThe most expensive step in the algorithm is computation of C(I \\ i).\nThe time complexity of this step is O(n2 \u03b1 \u03b5 log n\u03b1 \u03b5 ) as we have to iterate over all O(n) choices of tj l , for all l = i, and each time use the above scheme to compute Eq.\n5.\nIn the worst case, we might need to compute C(I \\ i) for all n sellers, in which case the final complexity of the algorithm will be O(n3 \u03b1 \u03b5 log n\u03b1 \u03b5 ).\nTHEOREM 4.\nWe can compute an /(1+ )-strategyproof approximation to the VCG mechanism in the forward and reverse multi-unit auctions in worst-case time O(n3 \u03b1 \u03b5 log n\u03b1 \u03b5 ).\nIt is interesting to recall that T = O(n3 \u03b5 ) is the time complexity of the FPTAS to the generalized knapsack problem with all agents.\nOur combined scheme computes an approximation to the complete VCG mechanism, including payments to O(n) agents, in time complexity O(T log(n/\u03b5)), taking the no-monopoly parameter, \u03b1, as a constant.\nThus, our algorithm performs much better than the naive scheme, which computes the VCG payment for each agent by solving a new instance of generalized knapsack problem.\nThe speed up comes from the way we solve iKnapsack\u2212i ( , j).\nTime complexity of computing iKnapsack\u2212i ( , j) by creating a new dynamic programming table will be O(n2 \u03b5 ) but by using the forward and backward tables, the complexity is reduced to O(n \u03b5 log n \u03b5 ).\nWe can further improve the time complexity of our algorithm by computing Eq.\n5 more efficiently.\nCurrently, the algorithm uses heap, which has logarithmic update time.\nIn worst case, we can have two heap update operations for each element, which makes the time complexity super linear.\nIf we can compute Eq.\n5 in linear time then the complexity of computing the VCG payment will be same as the complexity of solving a single 
generalized knapsack problem.\n5.\nCONCLUSIONS We presented a fully polynomial-time approximation scheme for the single-good multi-unit auction problem, using marginal decreasing piecewise constant bidding language.\nOur scheme is both approximately efficient and approximately strategyproof within any specified factor \u03b5 > 0.\nAs such it is an example of computationally tractable \u03b5-dominance result, as well as an example of a non-trivial but approximable allocation problem.\nIt is particularly interesting that we are able to compute the payments to n agents in a VCG-based mechanism in worst-case time O(T log n), where T is the time complexity to compute the solution to a single allocation problem.\n6.\nREFERENCES [1] L M Ausubel and P R Milgrom.\nAscending auctions with package bidding.\nFrontiers of Theoretical Economics, 1:1-42, 2002.\n[2] S Bikchandani, S de Vries, J Schummer, and R V Vohra.\nLinear programming and Vickrey auctions.\nTechnical report, Anderson Graduate School of Management, U.C.L.A., 2001.\n[3] S Bikchandani and J M Ostroy.\nThe package assignment model.\nJournal of Economic Theory, 2002.\nForthcoming.\n[4] K Chatterjee and W Samuelson.\nBargaining under incomplete information.\nOperations Research, 31:835-851, 1983.\n[5] E H Clarke.\nMultipart pricing of public goods.\nPublic Choice, 11:17-33, 1971.\n[6] S de Vries and R V Vohra.\nCombinatorial auctions: A survey.\nInforms Journal on Computing, 2002.\nForthcoming.\n[7] M Eso, S Ghosh, J R Kalagnanam, and L Ladanyi.\nBid evaluation in procurement auctions with piece-wise linear supply curves.\nTechnical report, IBM TJ Watson Research Center, 2001.\nin preparation.\n[8] J Feigenbaum and S Shenker.\nDistributed Algorithmic Mechanism Design: Recent Results and Future Directions.\nIn Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 1-13, 2002.\n[9] M R Garey and D S Johnson.\nComputers and Intractability: A Guide to the Theory of NP-Completeness.\nW.H.Freeman and Company, New York, 1979.\n[10] G V Gens and E V Levner.\nComputational complexity of approximation algorithms for combinatorial problems.\nIn Mathematical Foundation of Computer Science, 292-300, 1979.\n[11] T Groves.\nIncentives in teams.\nEconometrica, 41:617-631, 1973.\n[12] J R Kalagnanam, A J Davenport, and H S Lee.\nComputational aspects of clearing continuous call double auctions with assignment constraints and indivisible demand.\nElectronic Commerce Journal, 1(3):221-238, 2001.\n[13] V Krishna.\nAuction Theory.\nAcademic Press, 2002.\n[14] V Krishna and M Perry.\nEfficient mechanism design.\nTechnical report, Pennsylvania State University, 1998.\nAvailable at: http://econ.la.psu.edu/\u02dcvkrishna/vcg18.ps.\n[15] D Lehmann, L I O``Callaghan, and Y Shoham.\nTruth revelation in approximately efficient combinatorial auctions.\nJACM, 49(5):577-602, September 2002.\n[16] R B Myerson.\nOptimal auction design.\nMathematics of Operation Research, 6:58-73, 1981.\n[17] R B Myerson and M A Satterthwaite.\nEfficient mechanisms for bilateral trading.\nJournal of Economic Theory, 28:265-281, 1983.\n[18] N Nisan and A Ronen.\nComputationally feasible VCG mechanisms.\nIn ACM-EC, pages 242-252, 2000.\n[19] D C Parkes, J R Kalagnanam, and M Eso.\nAchieving budget-balance with Vickrey-based payment schemes in exchanges.\nIn IJCAI, 2001.\n[20] M H Rothkopf, A Peke\u02c7c, and R M Harstad.\nComputationally manageable combinatorial auctions.\nManagement Science, 44(8):1131-1147, 1998.\n[21] 
J Schummer. Almost dominant strategy implementation. Technical report, MEDS Department, Kellogg Graduate School of Management, 2001.
[22] W Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.",
"lvl-2": "Approximately-Strategyproof and Tractable Multi-Unit Auctions\nABSTRACT\nWe present an approximately-efficient and approximatelystrategyproof auction mechanism for a single-good multi-unit allocation problem .\nThe bidding language in our auctions allows marginal-decreasing piecewise constant curves .\nFirst , we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem , which computes a ( 1 + e ) approximation in worst-case time T = O ( n3/e ) , given n bids each with a constant number of pieces .\nSecond , we embed this approximation scheme within a Vickrey-Clarke-Groves ( VCG ) mechanism and compute payments to n agents for an asymptotic cost of O ( T log n ) .\nThe maximal possible gain from manipulation to a bidder in the combined scheme is bounded by e / ( 1 + e ) V , where V is the total surplus in the efficient outcome .\n1 .\nINTRODUCTION\nIn this paper we present a fully polynomial-time approximation scheme for the single-good multi-unit auction problem .\nOur scheme is both approximately efficient and approximately strategyproof .\nThe auction settings considered in our paper are motivated by recent trends in electronic commerce ; for instance , corporations are increasingly using auctions for their strategic sourcing .\nWe consider both a reverse auction variation and a forward auction variation , and propose a
compact and expressive bidding language that allows marginal-decreasing piecewise constant curves .\nIn the reverse auction , we consider a single buyer with a demand for M units of a good and n suppliers , each with a marginal-decreasing piecewise-constant cost function .\nIn addition , each supplier can also express an upper bound , or capacity constraint on the number of units she can supply .\nThe reverse variation models , for example , a procurement auction to obtain raw materials or other services ( e.g. circuit boards , power suppliers , toner cartridges ) , with flexible-sized lots .\nIn the forward auction , we consider a single seller with M units of a good and n buyers , each with a marginal-decreasing piecewise-constant valuation function .\nA buyer can also express a lower bound , or minimum lot size , on the number of units she demands .\nThe forward variation models , for example , an auction to sell excess inventory in flexible-sized lots .\nWe consider the computational complexity of implementing the Vickrey-Clarke-Groves [ 22 , 5 , 11 ] mechanism for the multiunit auction problem .\nThe Vickrey-Clarke-Groves ( VCG ) mechanism has a number of interesting economic properties in this setting , including strategyproofness , such that truthful bidding is a dominant strategy for buyers in the forward auction and sellers in the reverse auction , and allocative efficiency , such that the outcome maximizes the total surplus in the system .\nHowever , as we discuss in Section 2 , the application of the VCG-based approach is limited in the reverse direction to instances in which the total payments to the sellers are less than the value of the outcome to the buyer .\nOtherwise , either the auction must run at a loss in these instances , or the buyer can not be expected to voluntarily choose to participate .\nThis is an example of the budget-deficit problem that often occurs in efficient mechanism design [ 17 ] .\nThe computational problem is interesting , because even with marginal-decreasing bid curves , the underlying allocation problem turns out to ( weakly ) intractable .\nFor instance , the classic 0/1 knapsack is a special case of this problem .1 We model the 1However , the problem can be solved easily by a greedy scheme if we remove all capacity constraints from the seller and all\nallocation problem as a novel and interesting generalization of the classic knapsack problem , and develop a fully polynomialtime approximation scheme , computing a ( 1 + ~ ) - approximation in worst-case time T = O ( n3 / \u03b5 ) , where each bid has a fixed number of piecewise constant pieces .\nGiven this scheme , a straightforward computation of the VCG payments to all n agents requires time O ( nT ) .\nWe compute approximate VCG payments in worst-case time O ( \u03b1T log ( \u03b1n / \u03b5 ) ) , where \u03b1 is a constant that quantifies a reasonable `` no-monopoly '' assumption .\nSpecifically , in the reverse auction , suppose that C ( _ T ) is the minimal cost for procuring M units with all sellers _ T , and C ( _ T \\ i ) is the minimal cost without seller i. 
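To make the cost of the straightforward payment computation concrete, the sketch below computes the reverse-auction VCG payments by re-solving the allocation problem once per winning seller, which is exactly what yields the O ( nT ) bound quoted above; it follows the payment rule stated in Section 2.2. This is a minimal illustration in Python, not the paper's implementation: the names solve_min_cost and ask_cost, and the representation of a bid curve as ( min_units , max_units , unit_price ) intervals, are assumptions of the sketch.

def ask_cost(intervals, q):
    # p_ask,i(q): the seller's reported cost for q units, with the bid curve
    # simplified to a list of (min_units, max_units, unit_price) intervals.
    # Outside every interval the cost is infinite (minimum lot size / capacity).
    for lo, hi, unit_price in intervals:
        if lo <= q <= hi:
            return unit_price * q
    return float("inf")

def naive_vcg_payments_reverse(bids, M, V, solve_min_cost):
    # Straightforward reverse-auction VCG payments: one solve for C(I), then one
    # re-solve per winning seller for C(I \ i), hence the O(n*T) running time.
    #   bids           : dict seller_id -> list of (min_units, max_units, unit_price)
    #   M, V           : units the buyer demands and her value for obtaining them
    #   solve_min_cost : callable(bids, M) -> (min_cost, {seller_id: units});
    #                    each call is assumed to take time T
    C_all, alloc = solve_min_cost(bids, M)            # C(I) and the allocation x*
    if C_all > V:                                     # no efficient trade, no payments
        return {}
    payments = {}
    for i, units in alloc.items():
        if units == 0:                                # losing sellers are paid nothing
            continue
        others = {j: b for j, b in bids.items() if j != i}
        C_without_i, _ = solve_min_cost(others, M)    # C(I \ i)
        payments[i] = (ask_cost(bids[i], units)
                       + (V - C_all)
                       - max(0, V - C_without_i))     # p_ask,i(x*_i) + [V - C(I) - max(0, V - C(I\i))]
    return payments

Substituting the FPTAS of Section 3 for solve_min_cost gives the approximate payments whose possible gain from manipulation is bounded by the ε-strategyproofness argument discussed next.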
Then , the constant \u03b1 is defined as an upper bound for the ratio C ( _ T \\ i ) / C ( _ T ) , over all sellers i .\nThis upper-bound tends to 1 as the number of sellers increases .\nThe approximate VCG mechanism is ( \u03b5 1 + \u03b5 ) - strategyproof for an approximation to within ( 1 + ~ ) of the optimal allocation .\nThis means that a bidder can gain at most ( \u03b5 1 + \u03b5 ) V from a nontruthful bid , where V is the total surplus from the efficient allocation .\nAs such , this is an example of a computationally-tractable \u03b5-dominance result .2 In practice , we can have good confidence that bidders without good information about the bidding strategies of other participants will have little to gain from attempts at manipulation .\nSection 2 formally defines the forward and reverse auctions , and defines the VCG mechanisms .\nWe also prove our claims about \u03b5-strategyproofness .\nSection 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial time approximation scheme .\nSection 4 defines the approximation scheme for the payments in the VCG mechanism .\nSection 5 concludes .\n1.1 Related Work\nThere has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem , in which there are multiple different items .\nThe combinatorial allocation problem ( CAP ) is both NP-complete and inapproximable ( e.g. [ 6 ] ) .\nAlthough some polynomial-time cases have been identified for the CAP [ 6 , 20 ] , introducing an expressive exclusive-or bidding language quickly breaks these special cases .\nWe identify a non-trivial but approximable allocation problem with an expressive exclusiveor bidding language -- the bid taker in our setting is allowed to accept at most one point on the bid curve .\nThe idea of using approximations within mechanisms , while retaining either full-strategyproofness or \u03b5-dominance has received some previous attention .\nFor instance , Lehmann et al. [ 15 ] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem .\nNisan & Ronen [ 18 ] discussed approximate VCG-based mechanisms , but either appealed to particular maximal-in-range approximations to retain full strategyproofness , or to resource-bounded agents with information or computational limitations on the ability to compute strategies .\nFeigenminimum-lot size constraints from the buyers .\n2However , this may not be an example of what Feigenbaum & Shenker refer to as a tolerably-manipulable mechanism [ 8 ] because we have not tried to bound the effect of such a manipulation on the efficiency of the outcome .\nVCG mechanism do have a natural `` self-correcting '' property , though , because a useful manipulation to an agent is a reported value that improves the total value of the allocation based on the reports of other agents and the agent 's own value .\nbaum & Shenker [ 8 ] have defined the concept of strategically faithful approximations , and proposed the study of approximations as an important direction for algorithmic mechanism design .\nSchummer [ 21 ] and Parkes et al [ 19 ] have previously considered \u03b5-dominance , in the context of economic impossibility results , for example in combinatorial exchanges .\nEso et al. 
[ 7 ] have studied a similar procurement problem , but for a different volume discount model .\nThis earlier work formulates the problem as a general mixed integer linear program , and gives some empirical results on simulated data .\nKalagnanam et al. [ 12 ] address double auctions , where multiple buyers and sellers trade a divisible good .\nThe focus of this paper is also different : it investigates the equilibrium prices using the demand and supply curves , whereas our focus is on efficient mechanism design .\nAusubel [ 1 ] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values [ 1 ] , with an interpretation as a primal-dual algorithm [ 2 ] .\n2 .\nAPPROXIMATELY-STRATEGYPROOF VCG AUCTIONS\nIn this section , we first describe the marginal-decreasing piecewise bidding language that is used in our forward and reverse auctions .\nContinuing , we introduce the VCG mechanism for the problem and the \u03b5-dominance results for approximations to VCG outcomes .\nWe also discuss the economic properties of VCG mechanisms in these forward and reverse auction multi-unit settings .\n2.1 Marginal-Decreasing Piecewise Bids\nWe provide a piecewise-constant and marginal-decreasing bidding language .\nThis bidding language is expressive for a natural class of valuation and cost functions : fixed unit prices over intervals of quantities .\nSee Figure 1 for an example .\nIn addition , we slightly relax the marginal-decreasing requirement to allow : a bidder in the forward auction to state a minimal purchase amount , such that she has zero value for quantities smaller than that amount ; a seller in the reverse auction to state a capacity constraint , such that she has an effectively infinite cost to supply quantities in excess of a particular amount .\nFigure 1 : Marginal-decreasing , piecewise constant bids .\nIn the forward auction bid , the bidder offers $ 10 per unit for quantity in the range [ 5 , 10 ) , $ 8 per unit in the range [ 10 , 20 ) , and $ 7 in the range [ 20 , 25 ] .\nHer valuation is zero for quantities outside the range [ 10 , 25 ] .\nIn the reverse auction bid , the cost of the seller is \u221e outside the range [ 10 , 25 ] .\nIn detail , in a forward auction , a bid from buyer i can be written as a list of ( quantity-range , unit-price ) tuples , ( ( u1i , p1i ) ,\ni on the quantity .\nThe interpretation is that the bidder 's valuation in the\n( semi-open ) quantity range [ uji , uj +1 i ) is pji for each unit .\nAdditionally , it is assumed that the valuation is 0 for quantities less than u1i as well as for quantities more than umi .\nThis is implemented by adding two dummy bid tuples , with zero prices in the range [ 0 , u1i ) and ( umi i , \u221e ) .\nWe interpret the bid list as defining a price function , pbid , i ( q ) = qpji , if uji < q < uj +1\nA seller 's bid is similarly defined in the reverse auction .\nThe interpretation is that the bidder 's cost in the ( semi-open ) quantity range [ uji , uj +1 i ) is pji for each unit .\nAdditionally , it is assumed that the cost is \u221e for quantities less than u1i as well as for quantities more than umi .\nEquivalently , the unit prices in the ranges [ 0 , u1i ) and ( umi , \u221e ) are infinity .\nWe interpret the bid list as defining a price function , pask , i ( q ) = qpji , if uji < q < uj +1 i .\n2.2 VCG-Based Multi-Unit Auctions\nWe construct the tractable and approximately-strategyproof multiunit auctions around a VCG mechanism .\nWe assume that all agents have quasilinear utility 
functions ; that is , ui ( q , p ) = vi ( q ) \u2212 p , for a buyer i with valuation vi ( q ) for q units at price p , and ui ( q , p ) = p \u2212 ci ( q ) for a seller i with cost ci ( q ) at price p .\nThis is a standard assumption in the auction literature , equivalent to assuming risk-neutral agents [ 13 ] .\nWe will use the term payoff interchangeably for utility .\nIn the forward auction , there is a seller with M units to sell .\nWe assume that this seller has no intrinsic value for the items .\nGiven a set of bids from I agents , let V ( I ) denote the maximal revenue to the seller , given that at most one point on the bid curve can be selected from each agent and no more than M units of the item can be sold .\nLet x * = ( x * 1 , ... , x * N ) denote the solution to this winner - determination problem , where x * i is the number of units sold to agent i. Similarly , let V ( I \\ i ) denote the maximal revenue to the seller without bids from agent i .\nThe VCG mechanism is defined as follows :\n1 .\nReceive piecewise-constant bid curves and capacity constraints from all the buyers .\n2 .\nImplement the outcome x * that solves the winner-determination problem with all buyers .\n3 .\nCollect payment pvcg , i = pbid , i ( x * i ) \u2212 [ V ( I ) \u2212 V ( I \\ i ) ] from each buyer , and pass the payments to the seller .\nIn this forward auction , the VCG mechanism is strategyproof for buyers , which means that truthful bidding is a dominant strategy , i.e. utility maximizing whatever the bids of other buyers .\nIn addition , the VCG mechanism is allocatively-efficient , and the payments from each buyer are always positive .3 Moreover , each buyer pays less than its value , and receives payoff V ( I ) \u2212 V ( I \\ i ) in equilibrium ; this is precisely the marginal-value that buyer i contributes to the economic efficiency of the system .\nIn the reverse auction , there is a buyer with M units to buy , and n suppliers .\nWe assume that the buyer has value V > 0 to purchase all M units , but zero value otherwise .\nTo simplify the mechanism design problem we assume that the buyer will truthfully announce this value to the mechanism .4 The winner\nan efficient trading mechanism in this setting .\ndetermination problem in the reverse auction is to determine the allocation , x * , that minimizes the cost to the buyer , or forfeits trade if the minimal cost is greater than value , V .\nLet C ( I ) denote the minimal cost given bids from all sellers , and let C ( I \\ i ) denote the minimal cost without bids from seller i .\nWe can assume , without loss of generality , that there is an efficient trade and V > C ( I ) .\nOtherwise , then the efficient outcome is no trade , and the outcome of the VCG mechanism is no trade and no payments .\nThe VCG mechanism implements the outcome x * that minimizes cost based on bids from all sellers , and then provides payment pvcg , i = pask , i ( x * i ) + [ V \u2212 C ( I ) \u2212 max ( 0 , V \u2212 C ( I \\ i ) ) ] to each seller .\nThe total payment is collected from the buyer .\nAgain , in equilibrium each seller 's payoff is exactly the marginal-value that the seller contributes to the economic efficiency of the system ; in the simple case that V > C ( I \\ i ) for all sellers i , this is precisely C ( I \\ i ) \u2212 C ( I ) .\nAlthough the VCG mechanism remains strategyproof for sellers in the reverse direction , its applicability is limited to cases in which the total payments to the sellers are less than the buyer 's value .\nOtherwise , 
there will be instances in which the buyer will not choose to voluntarily participate in the mechanism , based on its own value and its beliefs about the costs of sellers .\nThis leads to a loss in efficiency when the buyer chooses not to participate , because efficient trades are missed .\nThis problem with the size of the payments , does not occur in simple single-item reverse auctions , or even in multi-unit reverse auctions with a buyer that has a constant marginal-valuation for each additional item that she procures .5 Intuitively , the problem occurs in the reverse multi-unit setting because the buyer demands a fixed number of items , and has zero value without them .\nThis leads to the possibility of the trade being contingent on the presence of particular , so-called `` pivotal '' sellers .\nDefine a seller i as pivotal , if C ( I ) < V but C ( I \\ i ) > V .\nIn words , there would be no efficient trade without the seller .\nAny time there is a pivotal seller , the VCG payments to that seller allow her to extract all of the surplus , and the payments are too large to sustain with the buyer 's value unless this is the only winning seller .\nConcretely , we have this participation problem in the reverse auction when the total payoff to the sellers , in equilibrium , exceeds the total payoff from the efficient allocation :\nAs stated above , first notice that we require V > C ( I \\ i ) for all sellers i .\nIn other words , there must be no pivotal sellers .\nGiven this , it is then necessary and sufficient that :\n5To make the reverse auction symmetric with the forward direction , we would need a buyer with a constant marginal-value to buy the first M units , and zero value for additional units .\nThe payments to the sellers would never exceed the buyer 's value in this case .\nConversely , to make the forward auction symmetric with the reverse auction , we would need a seller with a constant ( and high ) marginal-cost to sell anything less than the first M units , and then a low ( or zero ) marginal cost .\nThe total payments received by the seller can be less than the seller 's cost for the outcome in this case .\nIn words , the surplus of the efficient allocation must be greater than the total marginal-surplus provided by each seller .6 Consider an example with 3 agents f1 , 2 , 3 } , and V = 150 and C ( 123 ) = 50 .\nCondition ( 1 ) holds when C ( 12 ) = C ( 23 ) = 70 and C ( 13 ) = 100 , but not when C ( 12 ) = C ( 23 ) = 80 and C ( 13 ) = 100 .\nIn the first case , the agent payoffs \u03c0 = ( \u03c00 , \u03c01 , \u03c02 , \u03c03 ) , where 0 is the seller , is ( 10 , 20 , 50 , 20 ) .\nIn the second case , the payoffs are \u03c0 = ( \u2212 10 , 30 , 50 , 30 ) .\nOne thing we do know , because the VCG mechanism will maximize the payoff to the buyer across all efficient mechanisms [ 14 ] , is that whenever Eq .\n1 is not satisfied there can be no efficient auction mechanism .7\n2.3 \u03b5-Strategyproofness\nWe now consider the same VCG mechanism , but with an approximation scheme for the underlying allocation problem .\nWe derive an \u03b5-strategyproofness result , that bounds the maximal gain in payoff that an agent can expect to achieve through a unilateral deviation from following a simple truth-revealing strategy .\nWe describe the result for the forward auction direction , but it is quite a general observation .\nAs before , let V ( Z ) denote the value of the optimal solution to the allocation problem with truthful bids from all agents , and V ( Z \\ i ) denote the value 
of the optimal solution computed without bids from agent i. Let V\u02c6 ( Z ) and V\u02c6 ( Z \\ i ) denote the value of the allocation computed with an approximation scheme , and assume that the approximation satisfies :\nfor some ~ > 0 .\nWe provide such an approximation scheme for our setting later in the paper .\nLet x\u02c6 denote the allocation implemented by the approximation scheme .\nThe payoff to agent i , for announcing valuation \u02c6vi , is :\nThe final term is independent of the agent 's announced value , and can be ignored in an incentive-analysis .\nHowever , agent i can try to improve its payoff through the effect of its announced value on the allocation x\u02c6 implemented by the mechanism .\nIn particular , agent i wants the mechanism to select x\u02c6 to maximize the sum of its true value , vi ( \u02c6xi ) , and the reported value of the other agents , Ej ~ = i \u02c6vj ( \u02c6xj ) .\nIf the mechanism 's allocation algorithm is optimal , then all the agent needs to do is truthfully state its value and the mechanism will do the rest .\nHowever , faced with an approximate allocation algorithm , the agent can try to improve its payoff by announcing a value that corrects for the approximation , and causes the approximation algorithm to implement the allocation that exactly maximizes the total reported value of the other agents together with its own actual value [ 18 ] .\n6This condition is implied by the agents are substitutes requirement [ 3 ] , that has received some attention in the combinatorial auction literature because it characterizes the case in which VCG payments can be supported in a competitive equilibrium .\nUseful characterizations of conditions that satisfy agents are substitutes , in terms of the underlying valuations of agents have proved quite elusive .\n7Moreover , although there is a small literature on maximallyefficient mechanisms subject to requirements of voluntaryparticipation and budget-balance ( i.e. with the mechanism neither introducing or removing money ) , analytic results are only known for simple problems ( e.g. 
[ 16 , 4 ] ) .\nWe can now analyze the best possible gain from manipulation to an agent in our setting .\nWe first assume that the other agents are truthful , and then relax this .\nIn both cases , the maximal benefit to agent i occurs when the initial approximation is worst-case .\nWith truthful reports from other agents , this occurs when the value of choice x\u02c6 is V ( Z ) / ( 1 + \u03b5 ) .\nThen , an agent could hope to receive an improved payoff of : V ( Z ) \u03b5\nThis is possible if the agent is able to select a reported type to correct the approximation algorithm , and make the algorithm implement the allocation with value V ( Z ) .\nThus , if other agents are truthful , and with a ( 1 + \u03b5 ) - approximation scheme to the allocation problem , then no agent can improve its payoff by more than a factor \u03b5 / ( 1 + \u03b5 ) of the value of the optimal solution .\nThe analysis is very similar when the other agents are not truthful .\nIn this case , an individual agent can improve its payoff by no more than a factor ~ / ( 1 + ~ ) of the value of the optimal solution given the values reported by the other agents .\nLet V in the following theorem define the total value of the efficient allocation , given the reported values of agents j = ~ i , and the true value of agent i.\nNotice that we did not need to bound the error on the allocation problems without each agent , because the ~ - strategyproofness result follows from the accuracy of the first-term in the VCG payment and is independent of the accuracy of the second-term .\nHowever , the accuracy of the solution to the problem without each agent is important to implement a good approximation to the revenue properties of the VCG mechanism .\n3 .\nTHE GENERALIZED KNAPSACK PROBLEM\nIn this section , we design a fully polynomial approximation scheme for the generalized knapsack , which models the winnerdetermination problem for the VCG-based multi-unit auctions .\nWe describe our results for the reverse auction variation , but the formulation is completely symmetric for the forward-auction .\nIn describing our approximation scheme , we begin with a simple property ( the Anchor property ) of an optimal knapsack solution .\nWe use this property to develop an O ( n2 ) time 2-approximation for the generalized knapsack .\nIn turn , we use this basic approximation to develop our fully polynomial-time approximation scheme ( FPTAS ) .\nOne of the major appeals of our piecewise bidding language is its compact representation of the bidder 's valuation functions .\nWe strive to preserve this , and present an approximation scheme that will depend only on the number of bidders , and not the maximum quantity , M , which can be very large in realistic procurement settings .\nThe FPTAS implements an ( 1 + \u03b5 ) approximation to the optimal solution x \u2217 , in worst-case time T = O ( n3 / \u03b5 ) , where n is the number of bidders , and where we assume that the piecewise bid for each bidder has O ( 1 ) pieces .\nThe dependence on the number of pieces is also polynomial : if each bid has a maximum\nof c pieces , then the running time can be derived by substituting nc for each occurrence of n.\n3.1 Preliminaries\nBefore we begin , let us recall the classic 0/1 knapsack problem : we are given a set of n items , where the item i has value vi and size si , and a knapsack of capacity M ; all sizes are integers .\nThe goal is to determine a subset of items of maximum value with total size at most M .\nSince we want to focus on a reverse auction 
, the equivalent knapsack problem will be to choose a set of items with minimum value ( i.e. cost ) whose size exceeds M .\nThe generalized knapsack problem of interest to us can be defined as follows :\nGeneralized Knapsack :\nInstance : A target M , and a set of n lists , where the ith list has the form\nwhere uj i are increasing with j and pji are decreasing with j , and uji , pji , M are positive integers .\nProblem : Determine a set of integers xji such that 1 .\n( One per list ) At most one xji is non-zero for any i , 2 .\n( Membership ) xji = ~ 0 implies xji E [ uji , uj +1 i ) , 4 .\n( Objective ) EiEj pj 3 .\n( Target ) EiEj xji > M , and ixji is minimized .\nThis generalized knapsack formulation is a clear generalization of the classic 0/1 knapsack .\nIn the latter , each list consists of a single point ( si , vi ) .8 The connection between the generalized knapsack and our auction problem is transparent .\nEach list encodes a bid , representing multiple mutually exclusive quantity intervals , and one can choose any quantity in an interval , but at most one interval can be selected .\nChoosing interval [ uji , uj +1 i ) has cost pji per unit .\nThe goal is to procure at least M units of the good at minimum possible cost .\nThe problem has some flavor of the continuous knapsack problem .\nHowever , there are two major differences that make our problem significantly more difficult : ( 1 ) intervals have boundaries , and so to choose interval [ uji , uj +1 i ) requires that at least uji and at most uj +1 i units must be taken ; ( 2 ) unlike the classic knapsack , we can not sort the items ( bids ) by value/size , since different intervals in one list have different unit costs .\n3.2 A 2-Approximation Scheme\nWe begin with a definition .\nGiven an instance of the generalized knapsack , we call each tuple tji = ( uji , pji ) an anchor .\nRecall that these tuples represent the breakpoints in the piecewise constant curve bids .\nWe say that the size of an anchor tji is uji , 8In fact , because of the `` one per list '' constraint , the generalized problem is closer in spirit to the multiple choice knapsack problem [ 9 ] , where the underling set of items is partitioned into disjoint subsets U1 , U2 , ... , Uk , and one can choose at most one item from each subset .\nPTAS do exist for this problem [ 10 ] , and indeed , one can convert our problem into a huge instance of the multiple choice knapsack problem , by creating one group for each list ; put a ( quantity , price ) point tuple ( x , p ) for each possible quantity for a bidder into his group ( subset ) .\nHowever , this conversion explodes the problem size , making it infeasible for all but the most trivial instances .\nthe minimum number of units available at this anchor 's price pji .\nThe cost of the anchor tji is defined to be the minimum total price associated with this tuple , namely , cost ( tji ) = pji uji if j < mi , and cost ( tmi\nIn a feasible solution { x1 , x2 , ... 
, xn } of the generalized knapsack , we say that an element xi = ~ 0 is an anchor if xi = uji , for some anchor uji .\nOtherwise , we say that xi is midrange .\nWe observe that an optimal knapsack solution can always be constructed so that at most one solution element is midrange .\nIf there are two midrange elements x and x ' , for bids from two different agents , with x < x ' , then we can increment x ' and decrement x , until one of them becomes an anchor .\nSee Figure 2 for an example .\nLEMMA 1 .\n[ Anchor Property ] There exists an optimal solution of the generalized knapsack problem with at most one midrange element .\nAll other elements are anchors .\nFigure 2 : ( i ) An optimal solution with more than one bid not anchored ( 2,3 ) ; ( ii ) an optimal solution with only one bid ( 3 ) not anchored .\nWe use the anchor property to first obtain a polynomial-time 2-approximation scheme .\nWe do this by solving several instances of a restricted generalized-knapsack problem , which we call iKnapsack , where one element is forced to be midrange for a particular interval .\nSpecifically , suppose element x ~ for agent l is forced to lie in its jth range , [ uj ~ , uj +1 ~ ) , while all other elements , x1 , ... , xl \u2212 1 , xl +1 , xn , are required to be anchors , or zero .\nThis corresponds to the restricted problem iKnapsack ( f , j ) , in which the goal is to obtain at least M -- uj ~ units with minimum cost .\nElement x ~ is assumed to have already contributed uj ~ units .\nThe value of a solution to iKnapsack ( f , j ) represents the minimal additional cost to purchase the rest of the units .\nWe create n -- 1 groups of potential anchors , where ith group contains all the anchors of the list i in the generalized knapsack .\nThe group for agent l contains a single element that represents the interval [ 0 , uj +1 ~ -- uj ~ ) , and the associated unit-price pj ~ .\nThis interval represents the excess number of units that can be taken from agent l in iKnapsack ( f , j ) , in addition to uj ~ , which has already been committed .\nIn any other group , we can choose at most one anchor .\nThe following pseudo-code describes our algorithm for this restriction of the generalized knapsack problem .\nU is the union of all the tuples in n groups , including a tuple t ~ for agent l .\nThe size of this special tuple is defined as uj +1 ~ -- uj ~ , and the cost is defined aspj l ( uj +1 ~ -- uj ~ ) .\nR is the number of units that remain to be acquired .\nS is the set of tuples accepted in the current tentative\nsolution .\nBest is the best solution found so far .\nVariable Skip is only used in the proof of correctness .\nAlgorithm Greedy ( f , j ) 1 .\nSort all tuples of U in the ascending order of unit price ; in case of ties , sort in ascending order of unit quantities .\n2 .\nSet mark ( i ) = 0 , for all lists i = 1 , 2 , ... , n. Initialize R = M \u2212 uj ~ , S = Best = Skip = 0 .\n3 .\nScan the tuples in U in the sorted order .\nSuppose the next\ntuple is tki , i.e. 
the kth anchor from agent i .\nIf mark ( i ) = 1 , ignore this tuple ; otherwise do the following steps :\nfrom R.\nThe approximation algorithm is very similar to the approximation algorithm for knapsack .\nSince we wish to minimize the total cost , we consider the tuples in order of increasing per unit cost .\nIf the size of tuple tki is smaller than R , then we add it to S , update R , and delete from U all the tuples that belong to the same group as tki .\nIf size ( tki ) is greater than R , then S along with tki forms a feasible solution .\nHowever , this solution can be far from optimal if the size of tki is much larger than R .\nIf total cost of S and tki is smaller than the current best solution , we update Best .\nOne exception to this rule is the tuple t ~ .\nSince this tuple can be taken fractionally , we update Best if the sum of S 's cost and fractional cost of t ~ is an improvement .\nThe algorithm terminates in either of the first two cases , or when all tuples are scanned .\nIn particular , it terminates whenever we find a tki such that size ( tki ) is greater than R but cost ( tki ) is less than cost ( S ) , or when we reach the tuple representing agent l and it gives a feasible solution .\nPROOF .\nLet V ( B , j ) be the value returned by Greedy ( f , j ) and let V * ( B , j ) be an optimal solution for iKnapsack ( f , j ) .\nConsider the set Skip at the termination of Greedy ( f , j ) .\nThere are two cases to consider : either some tuple t E Skip is also in V * ( B , j ) , or no tuple in Skip is in V * ( B , j ) .\nIn the first case , let St be the tentative solution S at the time t was added to Skip .\nBecause t E Skip then size ( t ) > R , and St together with t forms a feasible solution , and we have :\nIn the second case , imagine a modified instance of iKnapsack ( f , j ) , which excludes all the tuples of the set Skip .\nSince none of these tuples were included in V * ( B , j ) , the optimal solution for the modified problem should be the same as the one for the original .\nSuppose our approximation algorithm returns the value V ' ( B , j ) for this modified instance .\nLet t ' be the last tuple considered by the approximation algorithm before termination on the modified instance , and let Sty be the corresponding tentative solution set in that step .\nSince we consider tuples in order of increasing per unit price , and none of the tuples are going to be placed in the set Skip , we must have cost ( Sty ) < V * ( f , j ) because Sty is the optimal way to obtain size ( Sty ) .\nWe also have cost ( t ' ) < cost ( Sty ) , and the following inequalities :\nThe inequality V ( B , j ) < V ' ( B , j ) follows from the fact that a tuple in the Skip list can only affect the Best but not the tentative solutions .\nTherefore , dropping the tuples in the set Skip can only make the solution worse .\nThe above argument has shown that the value returned by Greedy ( f , j ) is within a factor 2 of the optimal solution for iKnapsack ( f , j ) .\nWe now show that the value V ( B , j ) plus cost ( tj ~ ) is a 2-approximation of the original generalized knapsack problem .\nLet A * be an optimal solution of the generalized knapsack , and suppose that element xj ~ is midrange .\nLet x_~ to be set of the remaining elements , either zero or anchors , in this solution .\nFurthermore , define x ' ~ = xj ~ \u2212 uj ~ .\nThus ,\nIt is easy to see that ( x_~ , x ' ~ ) is an optimal solution for iKnapsack ( f , j ) .\nSince V ( B , j ) is a 2-approximation for this optimal solution , we have 
the following inequalities :\nThis completes the proof of Lemma 2 .\nIt is easy to see that , after an initial sorting of the tuples in U , the algorithm Greedy ( f , j ) takes O ( n ) time .\nWe have our first polynomial approximation algorithm .\nTHEOREM 2 .\nA 2-approximation of the generalized knapsack problem can be found in time O ( n2 ) , where n is number of item lists ( each of constant length ) .\nPROOF .\nWe run the algorithm Greedy ( f , j ) once for each tuple ( l , j ) as a candidate for midrange .\nThere are O ( n ) tuples , and it suffices to sort them once , the total cost of the algorithm is O ( n2 ) .\nBy Lemma 1 , there is an optimal solution with at most one midrange element , so our algorithm will find a 2-approximation , as claimed .\nThe dependence on the number of pieces is also polynomial : if each bid has a maximum of c pieces , then the running time is O ( ( nc ) 2 ) .\n3.3 An Approximation Scheme\nWe now use the 2-approximation algorithm presented in the preceding section to develop a fully polynomial approximation ( FPTAS ) for the generalized knapsack problem .\nThe high level idea is fairly standard , but the details require technical care .\nWe use a dynamic programming algorithm to solve iKnapsack ( ~ , j ) for each possible midrange element , with the 2-approximation algorithm providing an upper bound on the value of the solution and enabling the use of scaling on the cost dimension of the dynamic programming ( DP ) table .\nConsider , for example , the case that the midrange element is x ~ , which falls in the range [ uj ~ , uj +1 ~ ) .\nIn our FPTAS , rather than using a greedy approximation algorithm to solve iKnapsack ( ~ , j ) , we construct a dynamic programming table to compute the minimum cost at which at least M \u2212 uj +1 ~ units can be obtained using the remaining n \u2212 1 lists in the generalized knapsack .\nSuppose G [ i , r ] denotes the maximum number of units that can be obtained at cost at most r using only the first i lists in the generalized knapsack .\nThen , the following recurrence relation describes how to construct the dynamic programming table :\nwhere \u03b2 ( i , r ) = { j : 1 < j < mi , cost ( tji ) < r } , is the set of anchors for agent i .\nAs convention , agent i will index the row , and cost r will index the column .\nThis dynamic programming algorithm is only pseudo-polynomial , since the number of column in the dynamic programming table depends upon the total cost .\nHowever , we can convert it into a FPTAS by scaling the cost dimension .\nLet A denote the 2-approximation to the generalized knapsack problem , with total cost , cost ( A ) .\nLet \u03b5 denote the desired approximation factor .\nWe compute the scaled cost of a tuple tji , denoted scost ( tji ) , as\nThis scaling improves the running time of the algorithm because the number of columns in the modified table is at most n\u03b5 1 , and independent of the total cost .\nHowever , the computed solution might not be an optimal solution for the original problem .\nWe show that the error introduced is within a factor of \u03b5 of the optimal solution .\nAs a prelude to our approximation guarantee , we first show that if two different solutions to the iKnapsack problem have equal scaled cost , then their original ( unscaled ) costs can not differ by more than \u03b5cost ( A ) .\nLEMMA 3 .\nLet x and y be two distinct feasible solutions of iKnapsack ( ~ , j ) , excluding their midrange elements .\nIf x and y have equal scaled costs , then their unscaled costs can 
not differ by more than \u03b5cost ( A ) .\nPROOF .\nLet Ix and Iy , respectively , denote the indicator functions associated with the anchor vectors x and y -- there is 1 in position Ix [ i , k ] if the xki > 0 .\nSince x and y has equal scaled cost , However , by ( 2 ) , the scaled costs satisfy the following inequalities :\nSubstituting the upper-bound on scaled cost from ( 4 ) for cost ( x ) , the lower-bound on scaled cost from ( 4 ) for cost ( y ) , and using equality ( 3 ) to simplify , we have :\nThe last inequality uses the fact that at most n components of an indicator vector are non-zero ; that is , any feasible solution contains at most n tuples .\nFinally , given the dynamic programming table for iKnapsack ( ~ , j ) , we consider all the entries in the last row of this table , G [ n \u2212 1 , r ] .\nThese entries correspond to optimal solutions with all agents except l , for different levels of cost .\nIn particular , we consider the entries that provide at least M \u2212 uj +1 ~ units .\nTogether with a contribution from agent l , we choose the entry in this set that minimizes the total cost , defined as follows :\nPROOF .\nLet x_~ denote the vector of the elements in solution A * without element l. Then , by definition , cost ( A * ) = cost -LRB-x_~-RRB- + pj ~ xj ~ .\nLet r = scost -LRB-x_~-RRB- be the scaled cost associated with the vector x_~ .\nNow consider the dynamic programming table constructed for iKnapsack ( ~ , j ) , and consider its entry G [ n \u2212 1 , r ] .\nLet A denote the 2-approximation to the generalized knapsack problem , and A ( l , j ) denote the solution from the dynamic-programming algorithm .\nSuppose y _ ~ is the solution associated with this entry in our dynamic program ; the components of the vector y _ ~ are the quantities from different lists .\nSince both x_~ and y _ ~ have equal scaled costs , by Lemma 3 , their unscaled costs are within \u03b5cost ( A ) of each other ; that is ,\nNow , define yj ~ = max { uj ~ , M \u2212 di ~ = ~ j yji } ; this is the contribution needed from ~ to make ( y _ ~ , yj ~ ) a feasible solution .\nAmong all the equal cost solutions , our dynamic programming tables chooses the one with maximum units .\nTherefore ,\nwhere cost ( ) is the original , unscaled cost associated with entry G [ n \u2212 1 , r ] .\nIt is worth noting , that unlike the 2-approximation scheme for iKnapsack ( ~ , j ) , the value computed with this FPTAS includes the cost to acquire ujl units from l .\nThe following lemma shows that we achieve a ( 1 + \u03b5 ) - approximation .\nLEMMA 4 .\nSuppose A * is an optimal solution of the generalized knapsack problem , and suppose that element ( l , j ) is midrange in the optimal solution .\nThen , the solution A ( l , j ) from running the scaled dynamic-programming algorithm on iKnapsack ( ~ , j ) satisfies i ~ = ~ k scost ( tki ) Ix [ i , k ] = scost ( tki ) Iy [ i , k ] ( 3 ) i ~ = ~ yji \u2265 j xj i ~ = ~ k j i ~ = ~ i\nTherefore , it must be the case that yj ~ < xj ~ .\nBecause ( yj ~ , y _ ~ ) is also a feasible solution , if our algorithm returns a solution with cost cost ( A ( l , j ) ) , then we must have\nwhere we use the fact that cost ( A ) < 2cost ( A * ) .\nPutting this together , our approximation scheme for the generalized knapsack problem will iterate the scheme described above for each choice of the midrange element ( l , j ) , and choose the best solution from among these O ( n ) solutions .\nFor a given midrange , the most expensive step in the algorithm is the construction of 
dynamic programming table , which can be done in O ( n2 / \u03b5 ) time assuming constant intervals per list .\nThus , we have the following result .\nThe dependence on the number of pieces is also polynomial : if each bid has a maximum of c pieces , then the running time can be derived by substituting cn for each occurrence of n.\n4 .\nCOMPUTING VCG PAYMENTS\nWe now consider the related problem of computing the VCG payments for all the agents .\nA naive approach requires solving the allocation problem n times , removing each agent in turn .\nIn this section , we show that our approximation scheme for the generalized knapsack can be extended to determine all n payments in total time O ( \u03b1T log ( \u03b1n / \u03b5 ) ) , where 1 < C ( Z \\ i ) / C ( Z ) < \u03b1 , for a constant upper bound , \u03b1 , and T is the complexity of solving the allocation problem once .\nThis \u03b1-bound can be justified as a `` no monopoly '' condition , because it bounds the marginal value that a single buyer brings to the auction .\nSimilarly , in the reverse variation we can compute the VCG payments to each seller in time O ( \u03b1T log ( \u03b1n / \u03b5 ) ) , where \u03b1 bounds the ratio C ( Z \\ i ) / C ( Z ) for all i .\nOur overall strategy will be to build two dynamic programming tables , forward and backward , for each midrange element ( l , j ) once .\nThe forward table is built by considering the agents in the order of their indices , where as the backward table is built by considering them in the reverse order .\nThe optimal solution corresponding to C ( Z \\ i ) can be broken into two parts : one corresponding to first ( i \u2212 1 ) agents and the other corresponding to last ( n \u2212 i ) agents .\nAs the ( i \u2212 1 ) th row of the forward table corresponds to the sellers with first ( i \u2212 1 ) indices , an approximation to the first part will be contained in ( i \u2212 1 ) th row of the forward table .\nSimilarly , ( n \u2212 i ) th row of the backward table will contain an approximation for the second part .\nWe first present a simple but an inefficient way of computing the approximate value of C ( Z \\ i ) , which illustrates the main idea of our algorithm .\nThen we present an improved scheme , which uses the fact that the elements in the rows are sorted , to compute the approximate value more efficiently .\nIn the following , we concentrate on computing an allocation with xj ~ being midrange , and some agent i = ~ l removed .\nThis will be a component in computing an approximation to C ( Z \\ i ) , the value of the solution to the generalized knapsack without bids from agent i .\nWe begin with the simple scheme .\n4.1 A Simple Approximation Scheme\nWe implement the scaled dynamic programming algorithm for iKnapsack ( f , j ) with two alternate orderings over the other sellers , k = ~ l , one with sellers ordered 1 , 2 , ... , n , and one with sellers ordered n , n \u2212 1 , ... 
, 1 .\nWe call the first table the forward table , and denote it F ~ , and the second table the backward table , and denote it Bl .\nThe subscript f reminds us that the agent f is midrange .9 In building these tables , we use the same scaling factor as before ; namely , the cost of a tuple tji is scaled as follows :\nwhere cost ( A ) is the upper bound on C ( Z ) , given by our 2approximation scheme .\nIn this case , because C ( Z \\ i ) can be \u03b1 times C ( Z ) , the scaled value of C ( Z \\ i ) can be at most n\u03b1 / \u03b5 .\nTherefore , the cost dimension of our dynamic program 's table will be n\u03b1 / \u03b5 .\nFigure 3 : Computing VCG payments .\nm = n\u03b1\u03b5\nNow , suppose we want to compute a ( 1 + e ) - approximation to the generalized knapsack problem restricted to element ( l , j ) midrange , and further restricted to remove bids from some seller i = ~ l. Call this problem iKnapsack_i ( f , j ) .\nRecall that the ith row of our DP table stores the best solution possible using only the first i agents excluding agent l , all of them either cleared at zero , or on anchors .\nThese first i agents are a different subset of agents in the forward and the backward tables .\nBy carefully combining one row of Fl with one row of Bl we can compute an approximation to iKnapsack_i ( f , j ) .\nWe consider the row of Fl that corresponds to solutions constructed from agents { 1 , 2 , ... , i \u2212 1 } , skipping agent l .\nWe consider the row of Bl that corresponds to solutions constructed from agents { i +1 , i +2 , ... , n } , again skipping agent l .\nThe rows are labeled Fl ( i \u2212 1 ) and Bl ( n \u2212 i ) respectively .10 The scaled costs for acquiring these units are the column indices for these entries .\nTo solve iKnapsack_i ( B , j ) we choose one entry from row F ~ ( i \u2212 1 ) and one from row B ~ ( n \u2212 i ) such that their total quantity exceeds\n~ and their combined cost is minimum over all such combinations .\nFormally , let g G Fl ( i \u2212 1 ) , and h G Bl ( n \u2212 1 ) denote entries in each row , with size ( g ) , size ( h ) , denoting the number of units and cost ( g ) and cost ( h ) denoting the unscaled cost associated with the entry .\nWe compute the following , subject\nPROOF .\nFrom earlier , we define cost ( A -- i ) = C ( Z \\ i ) .\nWe can split the optimal solution , A -- i , into three disjoint parts : xl corresponds to the midrange seller , xi corresponds to first i -- 1 sellers ( skipping agent l if l < i ) , and x -- i corresponds to last n -- i sellers ( skipping agent l if l > i ) .\nWe have :\nLet ri = scost ( xi ) and r -- i = scost ( x -- i ) .\nLet yi and y -- i be the solution vectors corresponding to scaled cost ri and r -- i in F ~ ( i -- 1 ) and B ~ ( n -- i ) , respectively .\nFrom Lemma 3 we conclude that ,\nwhere cost ( A ) is the upper-bound on C ( Z ) computed with the 2-approximation .\nAmong all equal scaled cost solutions , our dynamic program chooses the one with maximum units .\nTherefore we also have , ( size ( yi ) > size ( xi ) ) and ( size ( y -- i ) > size ( x -- i ) ) where we use shorthand size ( x ) to denote total number of units in all tuples in x. 
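The pair-selection step behind Eq. 5, choosing one entry g from Fl ( i - 1 ) and one entry h from Bl ( n - i ) of minimum combined cost among the pairs whose total quantity reaches M - ujl, can be carried out with a single forward/backward walk rather than a scan of all pairs, because both rows are sorted so that units and unscaled cost increase from left to right; Section 4.2 below refines exactly this idea. A minimal illustrative sketch in Python follows; representing a table entry as a ( units , cost ) pair and passing the threshold as required_units are simplifications of the sketch, not the paper's data structures.

def min_cost_feasible_pair(forward_row, backward_row, required_units):
    # Minimize cost(g) + cost(h) over g in forward_row and h in backward_row
    # subject to units(g) + units(h) >= required_units (i.e. M - ujl).
    # Both rows are lists of (units, cost) entries in which units and cost are
    # nondecreasing from left to right, so one forward/backward walk suffices
    # instead of checking every pair.
    best = None
    i, j = 0, len(backward_row) - 1
    while i < len(forward_row) and j >= 0:
        f_units, f_cost = forward_row[i]
        b_units, b_cost = backward_row[j]
        if f_units + b_units >= required_units:
            total = f_cost + b_cost
            if best is None or total < best:
                best = total
            j -= 1        # feasible: try a cheaper backward entry for this g
        else:
            i += 1        # infeasible: take a forward entry with more units
    return best           # None if no pair can supply the required units

Each entry of the two rows is examined at most once, so the walk is linear in the row length O ( nα / ε ), matching the complexity stated for Eq. 6 in the improved scheme below.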
Now , define yjl = max ( ujl , M -- size ( yi ) -- size ( y -- i ) ) .\nFrom the preceding inequalities , we have yjl < xjl .\nSince ( yjl , yi , y -- i ) is also a feasible solution to the generalized knapsack problem without agent i , the value returned by Eq .\n5 is at most\nThis completes the proof .\nA naive implementation of this scheme will be inefficient because it might check ( n\u03b1 / E ) 2 pairs of elements , for any particular choice of ( l , j ) and choice of dropped agent i .\nIn the next section , we present an efficient way to compute Eq .\n5 , and eventually to compute the VCG payments .\n4.2 Improved Approximation Scheme\nOur improved approximation scheme for the winner-determination problem without agent i uses the fact that elements in F ~ ( i -- 1 ) and B ~ ( n -- i ) are sorted ; specifically , both , unscaled cost and quantity ( i.e. size ) , increases from left to right .\nAs before , let g and h denote generic entries in F ~ ( i -- 1 ) and B ~ ( n -- i ) respectively .\nTo compute Eq .\n5 , we consider all the tuple pairs , and first divide the tuples that satisfy condition size ( g ) + size ( h ) >\nl into two disjoint sets .\nFor each set we compute the best solution , and then take the best between the two sets .\nWe define a pair ( g , h ) to be feasible if size ( g ) + size ( h ) > M -- ujl .\nNow to compute Eq .\n6 , we do a forward and backward walk on F ~ ( i -- 1 ) and B ~ ( n -- i ) respectively .\nWe start from the smallest index of F ~ ( i -- 1 ) and move right , and from the highest index of B ~ ( n -- i ) and move left .\nLet ( g , h ) be the current pair .\nIf ( g , h ) is feasible , we decrement B 's pointer ( that is , move backward ) otherwise we increment F 's pointer .\nThe feasible pairs found during the walk are used to compute Eq .\n6 .\nThe complexity of this step is linear in size of F ~ ( i -- 1 ) , which is O ( n\u03b1 / E ) .\nTo compute the above equation , we transform the above problem to another problem using modified cost , which is defined as :\nThe modified cost simplifies the problem , but unfortunately the elements in F ~ ( i -- 1 ) and B ~ ( n -- i ) are no longer sorted with respect to mcost .\nHowever , the elements are still sorted in quantity and we use this property to compute Eq .\n7 .\nCall a pair ( g , h ) feasible if M -- uj +1 l < size ( g ) + size ( h ) < M -- ujl .\nDefine the feasible set of g as the elements h E B ~ ( n -- i ) that are feasible given g .\nAs the elements are sorted by quantity , the feasible set of g is a contiguous subset of B ~ ( n -- i ) and shifts left as g increases .\nFigure 4 : The feasible set of g = 3 , defined on B ~ ( n -- i ) , is { 2 , 3 , 41 when M -- uj +1\nl = 50 and M -- ujl = 60 .\nBegin and End represent the start and end pointers to the feasible set .\nTherefore , we can compute Eq .\n7 by doing a forward and backward walk on F ~ ( i -- 1 ) and B ~ ( n -- i ) respectively .\nWe walk on B ~ ( n -- i ) , starting from the highest index , using two pointers , Begin and End , to indicate the start and end of the current feasible set .\nWe maintain the feasible set as a min heap , where the key is modified cost .\nTo update the feasible set , when we increment F 's pointer ( move forward ) , we walk left on B , first using End to remove elements from feasible set which are no longer\nfeasible and then using Begin to add new feasible elements .\nFor a given g , the only element which we need to consider in g 's feasible set is the one with minimum modified cost which can be 
computed in constant time with the min heap .\nSo , the main complexity of the computation lies in heap updates .\nSince , any element is added or deleted at most once , there are O ( n\u03b1\u03b5 ) heap updates and the time complexity of this step is O ( n\u03b1\u03b5 log n\u03b1\u03b5 ) .\n4.3 Collecting the Pieces\nThe algorithm works as follows .\nFirst , using the 2 approximation algorithm , we compute an upper bound on C ( I ) .\nWe use this bound to scale down the tuple costs .\nUsing the scaled costs , we build the forward and backward tables corresponding to each tuple ( l , j ) .\nThe forward tables are used to compute C ( I ) .\nTo compute C ( I \\ i ) , we iterate over all the possible midrange tuples and use the corresponding forward and backward tables to compute the locally optimal solution using the above scheme .\nAmong all the locally optimal solutions we choose one with the minimum total cost .\nThe most expensive step in the algorithm is computation of C ( I \\ i ) .\nThe time complexity of this step is O ( n2\u03b5\u03b1 log n\u03b1\u03b5 ) as we have to iterate over all O ( n ) choices of tjl , for all l = ~ i , and each time use the above scheme to compute Eq .\n5 .\nIn the worst case , we might need to compute C ( I \\ i ) for all n sellers , in which case the final complexity of the algorithm will be O ( n3\u03b1\nIt is interesting to recall that T = O ( n3\u03b5 ) is the time complexity of the FPTAS to the generalized knapsack problem with all agents .\nOur combined scheme computes an approximation to the complete VCG mechanism , including payments to O ( n ) agents , in time complexity O ( T log ( n / \u03b5 ) ) , taking the no-monopoly parameter , \u03b1 , as a constant .\nThus , our algorithm performs much better than the naive scheme , which computes the VCG payment for each agent by solving a new instance of generalized knapsack problem .\nThe speed up comes from the way we solve iKnapsack \u2212 i ( B , j ) .\nTime complexity of computing iKnapsack \u2212 i ( B , j ) by creating a new dynamic programming table will be O ( n2\u03b5 ) but by using the forward and backward tables , the complexity is reduced to O ( n\u03b5 log n\u03b5 ) .\nWe can further improve the time complexity of our algorithm by computing Eq .\n5 more efficiently .\nCurrently , the algorithm uses heap , which has logarithmic update time .\nIn worst case , we can have two heap update operations for each element , which makes the time complexity super linear .\nIf we can compute Eq .\n5 in linear time then the complexity of computing the VCG payment will be same as the complexity of solving a single generalized knapsack problem .\n5 .\nCONCLUSIONS\nWe presented a fully polynomial-time approximation scheme for the single-good multi-unit auction problem , using marginal decreasing piecewise constant bidding language .\nOur scheme is both approximately efficient and approximately strategyproof within any specified factor \u03b5 > 0 .\nAs such it is an example of computationally tractable \u03b5-dominance result , as well as an example of a non-trivial but approximable allocation problem .\nIt is particularly interesting that we are able to compute the payments to n agents in a VCG-based mechanism in worst-case time O ( T log n ) , where T is the time complexity to compute the solution to a single allocation problem ."} {"id": "C-18", "title": "", "abstract": "", "keyphrases": ["malwar", "swarm worm", "emerg intellig", "slammer worm", "local commun mechan", "zachik", "prng method", "pre-gener 
target list", "distribut intellig", "intrus detect", "countermeasur system", "emerg behavior", "internet worm", "swarm intellig"], "prmu": [], "lvl-1": "An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior Fernando C.Col\u00b4on Osorio Wireless System Security Research Laboratory (W.S.S.R.L.) 420 Lakeside Avneue Marlboro, Massachusetts 01752 fcco@cs.wpi.edu Zachi Klopman Wireless System Security Research Laboratory (W.S.S.R.L.) 420 Lakeside Avneue Marlboro, Massachusetts 01752 zachi@cs.wpi.edu ABSTRACT The Slammer, which is currently the fastest computer worm in recorded history, was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes.\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response was ineffective.\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms.\nHowever, such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn our work, we put forth the hypothesis that next generation worms will be radically different, and potentially such techniques will prove ineffective.\nSpecifically, we propose to study a new generation of worms called Swarm Worms, whose behavior is predicated on the concept of emergent intelligence.\nEmergent Intelligence is the behavior of systems, very much like biological systems such as ants or bees, where simple local interactions of autonomous members, with simple primitive actions, gives rise to complex and intelligent global behavior.\nIn this manuscript we will introduce the basic principles behind the idea of Swarm Worms, as well as the basic structure required in order to be considered a swarm worm.\nIn addition, we will present preliminary results on the propagation speeds of one such swarm worm, called the ZachiK worm.\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Intrusion Detection; D.4.6 [Security and Protection]: Invasive software General Terms Experimentation, Security 1.\nINTRODUCTION AND PREVIOUSWORK In the early morning hours (05:30 GMT) of January 25, 2003 the fastest computer worm in recorded history began spreading throughout the Internet.\nWithin 10 minutes after the first infected host (patient zero), 90 percent of all vulnerable hosts had been compromised creating significant disruption to the global Internet infrastructure.\nVern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory in its analysis of Slammer commented: The Slammer worm spread so quickly that human response was ineffective, see [4] The interesting part, from our perspective, about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior, namely self-reproduction.\nSince Slammer, researchers have explored the behaviors of fast spreading worms, and have designed countermeasures strategies based primarily on rate detection and limiting algorithms.\nFor example, Zou, et al., [2], proposed a scheme where a Kalman filter is used to detect the early propagation of a worm.\nOther researchers have proposed the use of detectors where rates of Destination Unreachable messages are monitored by firewalls, and a significant increase beyond normal, alerts the organization to the 
potential presence of a worm.\nHowever, such strategies suffer from the fighting the last War syndrome.\nThat is, systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn the work described here, we put forth the hypothesis that next generation worms will be different, and therefore such techniques may have some significant limitations.\nSpecifically, we propose to study a new generation of worms called Swarm Worms, whose behavior is predicated on the concept of emergent intelligence.\nThe concept of emergent intelligence was first studied in association with biological systems.\nIn such studies, early researchers discovered a variety of interesting insect or animal behaviors in the wild.\nA flock of birds sweeps across the sky.\nA group of ants forages for food.\nA school of fish swims, turns, flees together away from a predator, ands so forth.\nIn general, this kind of aggregate motion has been called swarm behavior.\nBiologists, and computer scientists in the field of artificial intelligence have studied such biological swarms, and 323 attempted to create models that explain how the elements of a swarm interact, achieve goals, and evolve.\nMoreover, in recent years the study of swarm intelligence has become increasingly important in the fields of robotics, the design of Mobile ad-hoc Networks (MANETS), the design of Intrusion Detection Systems, the study of traffic patterns in transportation systems, in military applications, and other areas, see [3].\nThe basic concepts that have been developed over the last decade to explain swarms, and swarm behavior include four basic components.\nThese are: 1.\nSimplicity of logic & actions: A swarm is composed of N agents whose intelligence is limited.\nAgents in the swarm use simple local rules to govern their actions.\nSome models called this primitive actions or behaviors; 2.\nLocal Communication Mechanisms: Agents interact with other members in the swarm via simple local communication mechanisms.\nFor example, a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow.\n3.\nDistributed control: Autonomous agents interact with their environment, which probably consists of other agents, but act relatively independently from all other agents.\nThere is no central command or leader, and certainly there is no global plan.\n4.\nEmergent Intelligence: Aggregate behavior of autonomous agents results in complex intelligent behaviors; including self-organization.\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms.\nThis model, which extends the work by Weaver [5] is presented here in section 2.\nIn addition, we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts.\nSpecifically, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty-(30) hosts, as well as simulated in a 10,000 node 
topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterparts.\nThe rest of this manuscript is structured as follows.\nIn section 2, an abstract model of both generic worms and swarm worms is presented.\nThis model is used in section 2.6 to describe the first instance of a swarm worm, ZachiK.\nIn section 4, preliminary results from both empirical measurements and simulation are presented.\nFinally, in section 5, our conclusions and insights into future work are presented.\n2.\nWORM MODELING In order to study the behavior of swarm worms in general, it is necessary to create a model that realistically reflects the structure of worms without being tied to a specific instance.\nIn this section, we describe such a model, in which a general worm is described as having four-(4) basic components or subfunctions.\nBy definition, a worm is a self-contained, self-propagating program.\nThus, in simple terms, it has two main functions: that which propagates and that which does other things.\nWe propose that there is a third broad functionality of a worm, that of self-preservation.\nWe also propose that the other functionality of a worm is more appropriately categorized as Goal-Based Actions (GBA), since whatever functionality is included in a worm will naturally depend on the goals (and subgoals) of its author.\nThe work presented by Weaver et al. in [5] provides a mainly action- and technique-based taxonomy of computer worms, which we utilize and further extend here.\n2.1 Propagation The propagation function itself may be broken down into three actions: acquire target, send scan, and infect target.\nAcquiring the target simply means picking a host to attack next.\nSending a scan involves checking whether that host is receptive to an infection attempt, since the IP space is sparsely populated.\nThis may involve anything from a simple ping to check whether the host is alive to a full vulnerability assessment.\nInfecting the target is the actual method used to send the worm code to the new host.\nIn algorithm form:\npropagate() { host := acquire_target() success := send_scan(host) if( success ) then infect(host) endif }\nIn the case of a simple worm which does not first check whether the host is available or susceptible (such as Slammer), the scan step is dropped:\npropagate() { host := acquire_target() infect(host) }\nEach of these actions may have an associated cost to its inclusion and execution, such as increased worm size and CPU or network load.\nDepending on the author's needs or requirements, these become limiting factors in what may be included in the worm's actions.\nThis is discussed further after expanding upon these actions below.
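The abstract propagate() skeleton above can be exercised without any networking by simulating it over a toy population of addresses. The short Python sketch below is not part of the original paper; it is a minimal, simulation-only rendering of the acquire/scan/infect decomposition, with acquire_target left pluggable so that the parameterized acquisition strategies of section 2.2 can be dropped in. All names, sizes, and probabilities are illustrative assumptions.

import random

def make_population(size=10_000, vulnerable_fraction=0.1, seed=1):
    # Simulated address space: True marks a vulnerable host at that address.
    rng = random.Random(seed)
    return {addr: rng.random() < vulnerable_fraction for addr in range(size)}

def send_scan(population, addr):
    # Stand-in for a scan: does the address exist and is the host susceptible?
    return population.get(addr, False)

def propagate(population, infected, acquire_target, attempts=1_000):
    # The generic loop of section 2.1, run for a fixed number of attempts.
    for _ in range(attempts):
        addr = acquire_target()
        if send_scan(population, addr):   # a Slammer-like worm would skip this check
            infected.add(addr)            # "infect": record the compromise
    return infected

if __name__ == "__main__":
    population = make_population()
    rng = random.Random(2)
    random_scan = lambda: rng.randrange(10_000)   # simplest acquisition strategy
    infected = propagate(population, set(), random_scan)
    print(f"{len(infected)} of {sum(population.values())} vulnerable hosts reached")

Because acquire_target is just a callable, the scanning, hit-list, and local-preference variants described next can be swapped in without touching the loop.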
2.2 Target Acquisition The Target Acquisition phase of our worm algorithm is built directly on the Target Discovery section in [5].\nWeaver et al. taxonomize this task into five separate categories.\nHere, we further extend their work through parameterization.\nScanning: Scanning may be considered an equation-based method for choosing a host.\nAny type of equation may be used to arrive at an IP address, but there are three main types seen thus far: sequential, random, and local preference.\nSequential scanning is exactly what it sounds like: start at an IP address and increment through the entire IP space.\nThis could carry with it the options of which IP to start with (a user-chosen value, a random value, or one based on the IP of the infected host) and how many times to increment (continuous, a chosen value, or subnet-based).\nRandom scanning is completely at random (depending on the chosen PRNG method and its seed value).\nLocal preference scanning is a variant of either sequential or random scanning, whereby it has a greater probability of choosing a local IP address over a remote one (for example, the traditional 80/20 split).\nPre-generated Target Lists: Pre-generated target lists, or so-called hit-lists, could include options for the percentage of the total population and the percentage of wrong entries, or just the number of IPs to include.\nImplicit to this type is the fact that the list is divided among a parent and its children, avoiding the problem of every instance hitting exactly the same machines.\nExternally Generated Target Lists: Externally generated target lists depend on one or more external sources that can be queried for host data.\nThis will involve either servers that are normally publicly available, such as gaming meta-servers, or ones explicitly set up by the worm or the worm author.\nThe normally available meta-servers could have parameters for rates of change, such as many hosts popping up at night or leaving in the morning.\nEach server could also have a maximum number of queries per second that it would be able to handle.\nThe worm would also need a way of finding these servers, either hard-coded or through scanning.\nInternal Target Lists: Internal target lists are highly dependent on the infected host.\nThis method could parameterize how much information is available on the host, such as all machines in the subnet, all Windows boxes in the subnet, particular servers, the number of internal/external addresses, or some combination.\nPassive: Passive methods are determined by normal interactions between hosts.\nParameters may include a rate of interaction with particular machines, an internal/external rate of interaction, or a subnet-based rate of interaction.\nAny of these methods may also be combined to produce different types of target acquisition strategies.\nFor example, a worm may begin with an initial hit-list of 100 different hosts or subnets.\nOnce it has exhausted its search using the hit-list, it may then proceed to perform random scanning with a 50% local bias.\nIt is important to note, however, that the resource consumption of each method is not the same.\nDifferent methods may require the worm to be larger, such as the extra bytes required by a hit-list, or to take more processing time, such as when searching the host for addresses of other vulnerable hosts.\nFurther research and analysis should be performed in this area to determine the associated costs of using each method.\nThe costs could then be used in determining the design tradeoffs that worm authors engage in.\nFor example, hit-lists provide a high rate of infection, but at a high cost in worm payload size.
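The parameterization above lends itself to treating each acquisition method as an interchangeable generator of candidate addresses. The sketch below is an illustrative assumption rather than code from the paper: it expresses sequential, random, local-preference, and hit-list acquisition as small Python factories that can serve as acquire_target in the earlier propagation sketch. The address-space size, subnet size, and bias values are arbitrary.

import random

SPACE = 10_000   # toy address space

def sequential_scan(start=0):
    # Sequential scanning: begin at some address and step through the space.
    addr = start
    def next_target():
        nonlocal addr
        addr = (addr + 1) % SPACE
        return addr
    return next_target

def random_scan(seed=0):
    # Random scanning: behavior depends only on the PRNG and its seed.
    rng = random.Random(seed)
    return lambda: rng.randrange(SPACE)

def local_preference_scan(home, seed=0, subnet=256, local_bias=0.8):
    # Local preference: e.g. the traditional 80/20 local/remote split.
    rng = random.Random(seed)
    base = (home // subnet) * subnet
    def next_target():
        if rng.random() < local_bias:
            return base + rng.randrange(subnet)   # stay inside the local subnet
        return rng.randrange(SPACE)               # otherwise anywhere in the space
    return next_target

def hit_list_then(hit_list, fallback):
    # Hit-list acquisition: exhaust a pre-generated list, then fall back to scanning.
    remaining = list(hit_list)
    def next_target():
        return remaining.pop() if remaining else fallback()
    return next_target

# Example: a 100-entry hit-list followed by random scanning with a 50% local bias.
acquire_target = hit_list_then(range(100),
                               local_preference_scan(home=4242, seed=3, local_bias=0.5))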
2.2.1 Sending a Scan The send scan function tests whether the host is available for infection.\nThis can be as simple as checking whether the host is up on the network, or as complex as checking whether the host is vulnerable to the exploit that will be used.\nSending a scan before attempting infection can increase the effective scanning rate if the cost of a failed infection is greater than the cost of a failed scan (or of a scan plus an infection), and failures are more frequent than successes.\nOne important parameter here would be the choice of transport protocol (TCP/UDP), or simply the time for one successful scan and the time for one failed scan.\nAnother is whether the scan only tests that the host is up or performs a full test for the vulnerability (or for multiple vulnerabilities).\n2.2.2 Infection Vector (IV) The particular infection vector used to access the remote host is mainly dependent on the particular vulnerability chosen to exploit.\nIn a non-specific sense, it is dependent on the transport protocol chosen and the message size to be sent.\nSection 3 of [5] also proposes three particular classes of IV: self-carried, second channel, and embedded.\n2.3 Self Preservation The self-preservation actions of a worm may take many forms.\nIn the wild, worms have been observed to disable anti-virus software or to avoid sending themselves to certain addresses known to be associated with anti-virus vendors.\nThey have also been seen to attempt to disable other worms that may be contending for the same system.\nWe also believe that time-based, throttled scanning may help the worm to slip under the radar.\nWe also propose a decoy method, whereby a worm releases a few children that cause a lot of noise so that the parent goes unnoticed.\nIt has also been proposed [5] that a worm cause damage to its host if, and only if, it is disturbed in some way.\nThis module could contain parameters for: the probability of success in disabling anti-virus or other software updates, the probability of being noticed and thus removed, or the hardening of the host against other worms.\n2.4 Goal-Based Actions A worm's GBA functionality depends on the author's goal list.\nThe Payloads section of [5] provides some useful suggestions for such a module.\nThe opening of a back-door can make the host susceptible to more attacks.\nThis would involve a probability of the back-door being used and any associated traffic utilization.\nIt could also provide a list of other worms this host is now susceptible to, or a list of vulnerabilities this host now has.\nSpam relays and HTTP proxies of course have an associated bandwidth consumption or traffic pattern.\nInternet DoS attacks would have a set time of activation, a target, and a traffic pattern.\nData damage would have an associated probability that the host dies because of the damage.\nIn Figure 1, this general model of a worm is summarized.\nPlease note that in this model there is no learning, little or no sharing of information between worm instances, and certainly no coordination of actions.\nIn the next section we expand the model to include such mechanisms and hence arrive at the general model of a swarm worm.\n2.5 Swarms - General Model As described in section 1, the basic characteristics that distinguish swarm behavior from what merely appears to be collective coordinated behavior are four basic attributes.\nThese are:\n1. Simplicity of logic & actions;\n2. Local Communication Mechanisms;\n3. Distributed control; and\n4. Emergent Intelligence, including self-organization.\nStructure Function/Example Infection, Infection Vector Executable is run Protection & Stealthiness Disable McAfee (Staying Alive) Propagation Send email to
everyone in address book Goal Based Action (GBA) DDoS www.sco.com Everything else, often called payload Figure 1: General Worm Model In this work we aggregate all of these attributes under the general title of Learning, Communication, and Distributed Control.\nThe presence of these attributes distinguishes swarm worms from otherwise regular worms, or other types of malware such as Zombies.\nIn figure ??\n, the generic model of a worm is expanded to included these set of actions.\nWithin this context then, a worm like Slammer cannot be categorized as a swarm worm due to the fact that new instances of the worm do not coordinate their actions or share information.\nOn the other hand, Zombies and many other forms of DDoS, which at first glance may be consider swarm worms are not.\nThis is simply due to fact that in the case of Zombies, control is not distributed but rather centralized, and no emergent behaviors arise.\nThe latter, the potential emergence of intelligence or new behaviors is what makes swarm worms so potentially dangerous.\nFinally, when one considers the majority of recent disruptions to the Internet Infrastructure, and in light of our description of swarm attacks, then said disruptions can be easily categorized as precursors to truly swarm behavior.\nSpecifically, \u2022 DDOS - Large number of compromised hosts send useless packets requiring processing (Stacheldraht, http : //www.cert.org/ incidentnotes/IN \u2212 99 \u2212 04.\nhtml).\nDDoS attacks are the early precursors to Swarm Attacks due to the large number of agents involved.\n\u2022 Code Red CrV1, Code Red II, Nimbda - Exhibit early notions of swarm attacks, including a backdoor communication channel.\n\u2022 Staniford & Paxson in How to Own the Internet in Your Spare Time?\nexplore modifications to CrV1, Code Red I, II with a swarm like type of behavior.\nFor example, they speculate on new worms which employ direct worm-to-worm communication, and employ programmable updates.\nFor example the Warhol worm, and Permutation-Scanning (self coordinating) worms.\n2.6 Swarm Worm: the details In considering the creation of what we believed to be the first Swarm Worm in existence, we wanted to adhere as close as possible to the general model presented in section ??\nwhile at the same time facilitating large scale analysis, both empirical and through simulations, of the behavior of the swarm.\nFor this reason, we selected as the first instance Structure Function/Example Infection, Infection Vector Executable is run Protection & Stealthiness Disable McAfee (Staying Alive) Propagation Send email to everyone in address book Learning, Communication, Pheromones/Flags (Test and Distributed Control if Worm is already present) Time bombs, Learning Algorithms, IRC channel Goal Based Action (GBA) DDoS www.sco.com Everything else, often called payload Figure 2: General Model of a Swarm Worm of the swarm a simple password cracking worm.\nThe objective of this worm simply is to infect a host by sequentially attempting to login into the host using well known passwords (dictionary attack), passwords that have been discovered previously by any member of the swarm, and random passwords.\nOnce, a host is infected, the worm will create communication channels with both its known neighbors at that time, as well as with any offsprings that it successfully generates.\nIn this context a successful generation of an offspring means simply infecting a new host and replicating an exact copy of itself in such a host.\nWe call this swarm worm the ZachiK worm 
in honor of one of its creators.\nAs it can be seen from this description, the ZachiK worm exhibits all of the elements described before.\nIn the following sections, we described in detail each one of the elements of the ZachiK worm.\n2.7 Infection Vector The infection vector used for ZachiK worm is the secure shell protocol SSH.\nA modified client which is capable of receiving passwords from the command line was written, and integrated with a script that supplies it with various passwords: known and random.\nWhen a password is found for an appropriate target, the infection process begins.\nAfter the root password of a host is discovered, the worm infects the target host and replicates itself.\nThe worm creates a new directory in the target host, copies the modified ssh client, the script, the communications servers, and the updated versions of data files (list of known passwords and a list of current neighbors).\nIt then runs the modified script on the newly infected hosts, which spawns the communications server, notifies the neighbors and starts looking for new targets.\nIt could be argued, correctly, that the ZachiK worm can be easily defeated by current countermeasure techniques present on most systems today, such as disallowing direct root logins from the network.\nWithin this context ZachiK can quickly be discarded as very simple and harmless worm that does not require further study.\nHowever, the reader should consider the following: 1.\nZachiK can be easily modified to include a variety of infection vectors.\nFor example, it could be programmed to guess common user names and their passwords; gain 326 access to a system, then guess the root password or use other well know vulnerabilities to gain root privileges; 2.\nZachiK is a proof of concept worm.\nThe importance of ZachiK is that it incorporates all of the behaviors of a swarm worm including, but not restricted to, distributed control, communication amongst agents, and learning; 3.\nZachiK is composed of a large collection of agents operating independently which lends itself naturally to parallel algorithms such as a parallel search of the IPV4 address space.\nWithin this context, SLAMMER, does incorporate a parallel search capability of potentially susceptible addresses.\nHowever, unlike ZachiK, the knowledge discovered by the search is never shared amongst the agents.\nFor this reasons, and many others, one should not discard the potential of this new class of worms but rather embrace its study.\n2.8 Self-Preservation In the case of ZachiK worm, the main self-preservation techniques used are simply keeping the payload small.\nIn this context, this simply means restricting the number of passwords that an offspring inherits, masquerading worm messages as common http requests, and restricting the number of neighbors to a maximum of five-(5).\n2.9 Propagation Choosing the next target(s) in an efficient matter requires thought.\nIn the past, known and proposed worms, see [5], have applied propagation techniques that varied.\nThese include: strictly random selection of a potential vulnerable host; target lists of vulnerable hosts; locally biased random selection (select a host target at random from a local subnet); and a combination of some or all of the above.\nIn our test and simulation environments, we will apply a combination of locally biased and totally random selection of potential vulnerable hosts.\nHowever, due to the fact that the ZachiK worm is a swarm worm, address discovery (that is when non-existent addresses are 
discovered) information will be shared amongst the members of the swarm.\nThe infection and propagation threads repeatedly perform the following set of activities:\n• Choose an address\n• Check the validity of the address\n• Choose a set of passwords\n• Try to infect the selected host with this set of passwords\nAs described earlier, choosing an address makes use of a combination of random selection, local bias, and target lists.\nSpecifically, to choose an address, the instance may either:\n• Generate a new random address\n• Generate an address on the local network\n• Pick an address from a handoff list\nThe choice is made randomly among these options, and can be varied to test the dependency of propagation on particular choices.\nPasswords are either chosen from the list of known passwords or newly generated.\nWhen an infection of a valid address fails, the address is added to a list of handoffs, which is sent to the neighbors to work on.\n2.10 Learning, Communication and Distributed Control 2.10.1 Communication The concept of a swarm is based on the transfer of information amongst neighbors, which relay their new incoming messages to their own neighbors, and so on until every worm instance in the swarm is aware of these messages.\nThere are two classes of messages: data or information messages, and commands.\nThe command messages are meant for an external user (e.g., a hacker or cracker) to control the actions of the instances, and are currently not implemented.\nThe information messages are currently of three kinds: new member, passwords, and exploitable addresses (handoffs).\nThe new member messages are messages that a new instance sends to the neighbors on its (short) list of initial neighbors.\nThe neighbors then register these instances in their neighbor lists.\nThese messages form the multi-connectivity of the swarm; without them, the topology would be a tree-like structure, where eliminating a single node would cause the instances beneath it to become inaccessible.\nThe passwords messages inform instances of newly discovered passwords; by informing all instances, the swarm as a whole collects this information, which allows it to infect new hosts more effectively.\nThe handoffs messages inform instances of valid addresses that could not be compromised (i.e., the root password could not be broken).\nSince the address space is rather sparse, it takes a relatively long time (i.e., many trials) to discover a valid address.
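The neighbor-to-neighbor relaying of new-member, password, and handoff messages can be captured in a few lines. The Python sketch below is not from the paper; it is a simulation-only illustration under the assumptions that each message carries an identifier so instances do not relay it twice, and that the five-neighbor cap of section 2.8 applies.

class Instance:
    # A simulated swarm member: a bounded neighbor list plus the knowledge it has collected.
    def __init__(self, ident, max_neighbors=5):
        self.ident = ident
        self.max_neighbors = max_neighbors
        self.neighbors = []
        self.known_passwords = set()
        self.handoffs = set()      # valid but not-yet-compromised addresses
        self.seen = set()          # message ids already processed/relayed

def announce_new_member(newcomer, initial_neighbors):
    # New-member messages: the newcomer notifies its initial neighbors, which register it.
    # These cross-links keep the swarm multi-connected rather than tree-like.
    for n in initial_neighbors:
        if len(n.neighbors) < n.max_neighbors:
            n.neighbors.append(newcomer)
        if len(newcomer.neighbors) < newcomer.max_neighbors:
            newcomer.neighbors.append(n)

def relay(origin, kind, payload, msg_id):
    # Password and handoff messages flood neighbor to neighbor until every reachable
    # instance has seen them; msg_id prevents endless re-relaying over the cross-links.
    frontier = [origin]
    while frontier:
        node = frontier.pop()
        if msg_id in node.seen:
            continue
        node.seen.add(msg_id)
        if kind == "password":
            node.known_passwords.add(payload)
        else:  # "handoff": a valid address whose root password has not been broken yet
            node.handoffs.add(payload)
        frontier.extend(node.neighbors)

Relaying a newly cracked password, e.g. relay(a, "password", "guessed-secret", msg_id=7), is what turns the swarm into the distributed dictionary attack discussed in the results section.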
Therefore, by handing off discovered valid addresses, the swarm is (a) conserving energy by not re-discovering the same addresses and (b) attacking more effectively.\nIn a way, this is a simple instance of the coordinated activity of a swarm.\n2.10.2 Coordination When a worm instance is born, it relays its existence to all neighbors on its list.\nThe main thread then spawns a few infection threads, and continues to handle incoming messages (registering neighbors, adding new passwords, receiving addresses, and relaying these messages).\n2.10.3 Distributed Control Control in the ZachiK worm is distributed in the sense that each instance of the worm performs a set of actions independently of every other instance while at the same time benefiting from the learning achieved by its immediate neighbors.\n2.11 Goal Based Actions The first instantiation of the ZachiK worm has two basic goals.\nThese are: (1) propagate, and (2) discover new root passwords and share them with the members of the swarm.\n3.\nEXPERIMENTAL DESIGN In order to verify our hypothesis that Swarm Worms are more capable, and therefore more dangerous, than other well-known worms, a network testbed was created and a simulator, capable of simulating large-scale Internet-like topologies (IPV4 space), was developed.\nThe network testbed consisted of a local area network of 30 Linux-based computers.\nThe simulator was written in C++.\nThe simple swarm worm described in section 2.6 was used to infect patient zero, and the swarm worm was then allowed to propagate via its own mechanisms of propagation, distributed control, and swarm behaviors.\nIn the case of the simple local area network of 30 computers, six-(6) different root passwords out of a password space of 4 digits (10,000 options) were selected.\nAt the start of the experiment a single password is known, that of patient zero.\nAll shared passwords are distributed randomly across all nodes.\nSimilarly, in the case of the simulation, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPV4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nFor example, in one of our simulation runs, the network topology consisted of 200 subnets, each containing 50 hosts.\nIn such a topology, shared passwords were distributed across subnets, with a varying percentage of passwords shared across subnets.\nThe percentages of shared passwords used were reflective of early empirical studies, where up to 39.7% of common passwords were found to be shared.\n4.\nRESULTS In Figure 3, the results comparing Swarm Attack behavior versus that of a typical Malform Worm for a 30-node LAN are presented.\nIn this set of empirical runs, six-(6) shared passwords were distributed at random across all nodes out of a possible 10,000 unknown passwords.\nThe data presented reflect the behaviors of a total of three-(3) distinct classes of worms or swarm worms.\nThe classes of worms presented are as follows:\n• I-NS-NL:= Generic worm, independent (I), no learning/memoryless (NL), and no sharing of information with neighbors or offsprings (NS);\n• S-L-SP:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offsprings; and\n• S-L-SP&A:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offsprings.
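The three classes just defined can be contrasted in a toy, simulation-only Python sketch. This is not code from the paper: swarm-wide sharing of passwords and handoffs is modeled as instantaneous, which upper-bounds what the neighbor relay achieves, and all parameters are illustrative, loosely echoing the LAN setup of six shared root passwords in a 10,000-value password space. It is meant only to show the qualitative gap between a memoryless generic worm and the sharing variants; the additional gain from address handoffs reported in the paper emerges at larger, sparser scales with local bias, which this sketch does not model.

import random

SPACE = 1_000        # sparse toy address space (only some addresses host a machine)
N_HOSTS = 30         # reachable hosts, as in the LAN experiment
N_PASSWORDS = 6      # shared root passwords distributed across hosts
GUESS_SPACE = 10_000 # 4-digit password space
RANDOM_TRIES = 5     # random guesses per infection attempt

def make_world(rng):
    addrs = rng.sample(range(SPACE), N_HOSTS)
    return {a: rng.randrange(N_PASSWORDS) for a in addrs}   # addr -> its root password

def run(policy, seed=0, max_steps=2_000):
    # policy: "I-NS-NL" (memoryless, no sharing), "S-L-SP" (share passwords),
    # or "S-L-SP&A" (share passwords and handed-off valid addresses).
    rng = random.Random(seed)
    world = make_world(rng)
    patient_zero = next(iter(world))
    infected = {patient_zero}
    known = {world[patient_zero]}   # swarm-wide password knowledge (sharing policies only)
    handoffs = []                   # valid but not-yet-cracked addresses (S-L-SP&A only)
    for step in range(1, max_steps + 1):
        for _ in range(len(infected)):              # one attempt per instance per step
            if policy == "S-L-SP&A" and handoffs:
                addr = handoffs.pop()               # reuse a handed-off valid address
            else:
                addr = rng.randrange(SPACE)         # otherwise scan at random
            if addr not in world or addr in infected:
                continue
            guesses = set(known) if policy != "I-NS-NL" else set()
            guesses.update(rng.randrange(GUESS_SPACE) for _ in range(RANDOM_TRIES))
            if world[addr] in guesses:
                infected.add(addr)
                if policy != "I-NS-NL":
                    known.add(world[addr])          # a newly cracked password spreads
            elif policy == "S-L-SP&A":
                handoffs.append(addr)               # hand the valid address to the swarm
        if len(infected) == N_HOSTS:
            break
    return len(infected), step

if __name__ == "__main__":
    for policy in ("I-NS-NL", "S-L-SP", "S-L-SP&A"):
        count, steps = run(policy)
        print(f"{policy:9s}: {count:2d}/{N_HOSTS} hosts infected after {steps} steps")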
As shown in Figure 3, the results validate our original hypothesis that swarm worms are significantly more efficient and dangerous than generic worms.\nIn this set of experiments, the sharing of passwords provides an order of magnitude improvement over a memoryless random worm.\nSimilarly, a swarm worm that shares passwords and addresses is approximately two orders of magnitude more efficient than its generic counterpart.\nIn Figure 3, a series of discontinuities can be observed.\nThese discontinuities are an artifact of the small sample space used for this experiment.\nBasically, as soon as a password is broken, all nodes sharing that specific password are infected within a few seconds.\nNote that it is trivial for a swarm worm to scan and discover a small shared password space.\nIn Figure 4, the simulation results comparing Swarm Attack Behavior versus that of a Generic Malform Worm are presented.\nIn this set of simulation runs, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPV4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nThe data presented reflect the behaviors of three-(3) distinct classes of worms or swarm worms and two-(2) different target host selection scanning strategies (random scanning and local bias).\nThe amount of local bias was varied across multiple simulation runs.\nThe results presented are aggregate behaviors.\nIn general, the following classes of Generic Worms and Swarm Worms were simulated.\nAddress Scanning:\n• Random:= addresses are selected at random from a subset of the IPV4 space, namely, a 2^24 address space; and\n• Local Bias:= addresses are selected at random from either a local subnet (256 addresses) or from a subset of the IPV4 space, namely, a 2^24 address space.\nThe percentage of local bias is varied across multiple runs.\nLearning, Communication & Distributed Control\n• I-NL-NS:= Generic worm, independent (I), no learning/memoryless (NL), and no sharing of information with neighbors or offsprings (NS);\n• I-L-OOS:= Generic worm, independent (I), learning (L), and one-time sharing of information with offsprings only (OOS);\n• S-L-SP:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offsprings;\n• S-L-S&AOP:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of addresses with neighbors and offsprings, while sharing passwords one time only (at creation) with offsprings (SA&OP);\n• S-L-SP&A:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offsprings.\nAs shown in Figure 4, the results are consistent with our set of empirical results.\nIn addition, the following observations can be made.\n1.\nLocal preference is incredibly effective.\n2.\nShort address handoffs are more effective than long ones.\nWe varied the size of the list allowed in the sharing of addresses; the overhead associated with a long address list is detrimental to the performance of the swarm worm, as well as to its stealthiness.\n3.\nFor the local bias case, sharing valid addresses of susceptible hosts, as the S-L-S&AOP worm does (recall that the S-L-S&AOP swarm shares passwords one time only, at creation, with its offsprings), is more effective than sharing
passwords in the case of the S-L-SP swarm.\nIn this case, we can think of the swarm as launching a distributeddictionary attack: different segments of the swarm use different passwords to try to break into susceptible uninfected host.\nIn the local bias mode, early in the life of the swarm, address-sharing is more effective than password-sharing, until most subnets are discovered.\nThen the targeting of local addresses assists in discovering the susceptible hosts, while the swarm members need to waste time rediscovering passwords; and 4.\nInfecting the last 0.5% of nodes takes a very long time in non-local bias mode.\nBasically, the shared password list across subnets has been exhausted, and the swarm reverts to simply a random discovery of password algorithm.\nFigure 3: Swarm Attack Behavior vs. Malform Worm: Empirical Results, 30node LAN Figure 4: Swarm Attack Behavior vs. Malform Worm: Simulation Results 5.\nSUMMARY AND FUTURE WORK In this manuscript, we have presented an abstract model, similar in some aspects to that of Weaver [5], that helps explain the generic nature of worms.\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nIn addition, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty-(30) hosts, as well as simulated in a 10,000 node topology.\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities.\nThis work opens up a new area of interesting problems.\nSome of the most interesting and pressing problems to be consider are as follows: \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence, agent systems, and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms?\n; and \u2022 What techniques, if any, can be developed to create defenses against swarm worms?\n6.\nACKNOWLEDGMENTS This work was conducted as part of a larger effort in the development of next generation Intrusion Detection & CounterMeasure Systems at WSSRL.\nThe work is conducted under the auspices of Grant ACG-2004-06 by the Acumen Consulting Group, Inc., Marlboro, Massachusetts.\n7.\nREFERENCES [1] C.C. Zou, L. Gao, W. G., and Towsley, D. Monitoring and early warning for internet worms.\nIn 10th ACM Conference on Computer and Communications Security, Washington, DC (October 2003).\n[2] Liu, S., and Passino, K. Swarm intelligence: Literature overview.\nIn Dept. 
of Electrical Engineering, The Ohio State University, 2015 Neil Ave., Columbus, OH 43210 (2000).\n[3] Moore, D., Paxson, V., Savage, S., Shannon, C., Staniford, S., and Weaver, N.\nThe spread of the saphire/slammer worm.\nTech.\nrep., A joint effort of CAIDA, ICSI, Silicon Defense, UC Berkeley EECS and UC San Diego CSE, 2003.\n[4] Weaver, N., Paxson, V., Staniford, S., and Cunningham, R.\nA taxonomy of computer worms.\nIn Proceedings of the ACM Workshop on Rapid Malware (WORM) (2003).\n329", "lvl-3": "An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer , which is currently the fastest computer worm in recorded history , was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes .\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself , it still spreads so quickly that human response was ineffective .\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms .\nHowever , such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn our work , we put forth the hypothesis that next generation worms will be radically different , and potentially such techniques will prove ineffective .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nEmergent Intelligence is the behavior of systems , very much like biological systems such as ants or bees , where simple local interactions of autonomous members , with simple primitive actions , gives rise to complex and intelligent global behavior .\nIn this manuscript we will introduce the basic principles behind the idea of '' Swarm Worms '' , as well as the basic structure required in order to be considered a '' swarm worm '' .\nIn addition , we will present preliminary results on the propagation speeds of one such swarm worm , called the ZachiK worm .\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities .\n1 .\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours ( 05:30 GMT ) of January 25 , 2003 the fastest computer worm in recorded history began spreading throughout the Internet .\nWithin 10 minutes after the first infected host ( patient zero ) , 90 percent of all vulnerable hosts had been compromised creating significant disruption to the global Internet infrastructure .\nVern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory in its analysis of Slammer commented : '' The Slammer worm spread so quickly that human response was ineffective '' , see [ 4 ] The interesting part , from our perspective , about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior , namely self-reproduction .\nSince Slammer , researchers have explored the behaviors of fast spreading worms , and have designed countermeasures strategies based primarily on rate detection and limiting algorithms .\nFor example , Zou , et al. 
, [ 2 ] , proposed a scheme where a Kalman filter is used to detect the early propagation of a worm .\nOther researchers have proposed the use of detectors where rates of '' Destination Unreachable '' messages are monitored by firewalls , and a significant increase beyond '' normal '' , alerts the organization to the potential presence of a worm .\nHowever , such strategies suffer from the '' fighting the last War '' syndrome .\nThat is , systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn the work described here , we put forth the hypothesis that next generation worms will be different , and therefore such techniques may have some significant limitations .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nThe concept of emergent intelligence was first studied in association with biological systems .\nIn such studies , early researchers discovered a variety of interesting insect or animal behaviors in the wild .\nA flock of birds sweeps across the sky .\nA group of ants forages for food .\nA school of fish swims , turns , flees together away from a predator , ands so forth .\nIn general , this kind of aggregate motion has been called '' swarm behavior . ''\nBiologists , and computer scientists in the field of artificial intelligence have studied such biological swarms , and\nattempted to create models that explain how the elements of a swarm interact , achieve goals , and evolve .\nMoreover , in recent years the study of '' swarm intelligence '' has become increasingly important in the fields of robotics , the design of Mobile Ad-Hoc Networks ( MANETS ) , the design of Intrusion Detection Systems , the study of traffic patterns in transportation systems , in military applications , and other areas , see [ 3 ] .\nThe basic concepts that have been developed over the last decade to explain '' swarms , and '' swarm behavior '' include four basic components .\nThese are :\n1 .\nSimplicity of logic & actions : A swarm is composed of N agents whose intelligence is limited .\nAgents in the swarm use simple local rules to govern their actions .\nSome models called this primitive actions or behaviors ; 2 .\nLocal Communication Mechanisms : Agents interact with other members in the swarm via simple '' local '' communication mechanisms .\nFor example , a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow .\n3 .\nDistributed control : Autonomous agents interact with their environment , which probably consists of other agents , but act relatively independently from all other agents .\nThere is no central command or leader , and certainly there is no global plan .\n4 . 
''\nEmergent Intelligence '' : Aggregate behavior of autonomous agents results in complex '' intelligent '' behaviors ; including self-organization '' .\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms .\nThis model , which extends the work by Weaver [ 5 ] is presented here in section 2 .\nIn addition , we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts .\nSpecifically , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology .\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterpart .\nThe rest of this manuscript is structure as follows .\nIn section 2 an abstract model of both generic worms as well as swarm worms is presented .\nThis model is used in section 2.6 to described the first instance of a swarm worm , ZachiK .\nIn section 4 , preliminary results via both empirical measurements as well as simulation is presented .\nFinally , in section 5 our conclusions and insights into future work are presented .\n2 .\nWORM MODELING\n2.1 Propagation\n2.2 Target Acquisition :\n2.2.1 Sending a Scan\n2.2.2 Infection Vector ( IV )\n2.3 Self Preservation\n2.4 Goal-Based Actions\n2.5 Swarms - General Model\n2.6 Swarm Worm : the details\n2.7 Infection Vector\n2.8 Self-Preservation\n2.9 Propagation\n2.10.1 Communication\n2.10.2 Coordination\n2.10.3 Distributed Control\n2.11 Goal Based Actions\n3 .\nEXPERIMENTAL DESIGN\n4 .\nRESULTS\nAddress Scanning :\n5 .\nSUMMARY AND FUTURE WORK\nIn this manuscript , we have presented an abstract model , similar in some aspects to that of Weaver [ 5 ] , that helps explain the generic nature of worms .\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their generic counterparts .\nIn addition , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology .\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities .\nThis work opens up a new area of interesting problems .\nSome of the most interesting and pressing problems to be consider are as follows : \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence 
, agent systems , and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place ?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms ?\n; and \u2022 What techniques , if any , can be developed to create defenses against swarm worms ?", "lvl-4": "An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer , which is currently the fastest computer worm in recorded history , was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes .\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself , it still spreads so quickly that human response was ineffective .\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms .\nHowever , such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn our work , we put forth the hypothesis that next generation worms will be radically different , and potentially such techniques will prove ineffective .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nEmergent Intelligence is the behavior of systems , very much like biological systems such as ants or bees , where simple local interactions of autonomous members , with simple primitive actions , gives rise to complex and intelligent global behavior .\nIn this manuscript we will introduce the basic principles behind the idea of '' Swarm Worms '' , as well as the basic structure required in order to be considered a '' swarm worm '' .\nIn addition , we will present preliminary results on the propagation speeds of one such swarm worm , called the ZachiK worm .\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities .\n1 .\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours ( 05:30 GMT ) of January 25 , 2003 the fastest computer worm in recorded history began spreading throughout the Internet .\nSince Slammer , researchers have explored the behaviors of fast spreading worms , and have designed countermeasures strategies based primarily on rate detection and limiting algorithms .\nFor example , Zou , et al. , [ 2 ] , proposed a scheme where a Kalman filter is used to detect the early propagation of a worm .\nThat is , systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn the work described here , we put forth the hypothesis that next generation worms will be different , and therefore such techniques may have some significant limitations .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nThe concept of emergent intelligence was first studied in association with biological systems .\nIn such studies , early researchers discovered a variety of interesting insect or animal behaviors in the wild .\nA flock of birds sweeps across the sky .\nIn general , this kind of aggregate motion has been called '' swarm behavior . 
''\nBiologists , and computer scientists in the field of artificial intelligence have studied such biological swarms , and\nattempted to create models that explain how the elements of a swarm interact , achieve goals , and evolve .\nThe basic concepts that have been developed over the last decade to explain '' swarms , and '' swarm behavior '' include four basic components .\nThese are :\n1 .\nSimplicity of logic & actions : A swarm is composed of N agents whose intelligence is limited .\nAgents in the swarm use simple local rules to govern their actions .\nSome models called this primitive actions or behaviors ; 2 .\nLocal Communication Mechanisms : Agents interact with other members in the swarm via simple '' local '' communication mechanisms .\nFor example , a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow .\n3 .\n4 . ''\nEmergent Intelligence '' : Aggregate behavior of autonomous agents results in complex '' intelligent '' behaviors ; including self-organization '' .\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms .\nThis model , which extends the work by Weaver [ 5 ] is presented here in section 2 .\nIn addition , we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts .\nSpecifically , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology .\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterpart .\nThe rest of this manuscript is structure as follows .\nIn section 2 an abstract model of both generic worms as well as swarm worms is presented .\nThis model is used in section 2.6 to described the first instance of a swarm worm , ZachiK .\nIn section 4 , preliminary results via both empirical measurements as well as simulation is presented .\nFinally , in section 5 our conclusions and insights into future work are presented .\n5 .\nSUMMARY AND FUTURE WORK\nIn this manuscript , we have presented an abstract model , similar in some aspects to that of Weaver [ 5 ] , that helps explain the generic nature of worms .\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their generic counterparts .\nIn addition , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology 
.\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities .\nThis work opens up a new area of interesting problems .\nSome of the most interesting and pressing problems to be consider are as follows : \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence , agent systems , and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place ?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms ?\n; and \u2022 What techniques , if any , can be developed to create defenses against swarm worms ?", "lvl-2": "An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer , which is currently the fastest computer worm in recorded history , was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes .\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself , it still spreads so quickly that human response was ineffective .\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms .\nHowever , such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn our work , we put forth the hypothesis that next generation worms will be radically different , and potentially such techniques will prove ineffective .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nEmergent Intelligence is the behavior of systems , very much like biological systems such as ants or bees , where simple local interactions of autonomous members , with simple primitive actions , gives rise to complex and intelligent global behavior .\nIn this manuscript we will introduce the basic principles behind the idea of '' Swarm Worms '' , as well as the basic structure required in order to be considered a '' swarm worm '' .\nIn addition , we will present preliminary results on the propagation speeds of one such swarm worm , called the ZachiK worm .\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities .\n1 .\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours ( 05:30 GMT ) of January 25 , 2003 the fastest computer worm in recorded history began spreading throughout the Internet .\nWithin 10 minutes after the first infected host ( patient zero ) , 90 percent of all vulnerable hosts had been compromised creating significant disruption to the global Internet infrastructure .\nVern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory in its analysis of Slammer commented : '' The Slammer worm spread so quickly that human response was ineffective '' , see [ 4 ] The interesting part , from our perspective , about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior , namely self-reproduction .\nSince Slammer , researchers have explored the behaviors of fast spreading worms , and have designed countermeasures strategies based primarily on rate detection and 
limiting algorithms .\nFor example , Zou , et al. , [ 2 ] , proposed a scheme where a Kalman filter is used to detect the early propagation of a worm .\nOther researchers have proposed the use of detectors where rates of '' Destination Unreachable '' messages are monitored by firewalls , and a significant increase beyond '' normal '' , alerts the organization to the potential presence of a worm .\nHowever , such strategies suffer from the '' fighting the last War '' syndrome .\nThat is , systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer .\nIn the work described here , we put forth the hypothesis that next generation worms will be different , and therefore such techniques may have some significant limitations .\nSpecifically , we propose to study a new generation of worms called '' Swarm Worms '' , whose behavior is predicated on the concept of '' emergent intelligence '' .\nThe concept of emergent intelligence was first studied in association with biological systems .\nIn such studies , early researchers discovered a variety of interesting insect or animal behaviors in the wild .\nA flock of birds sweeps across the sky .\nA group of ants forages for food .\nA school of fish swims , turns , flees together away from a predator , ands so forth .\nIn general , this kind of aggregate motion has been called '' swarm behavior . ''\nBiologists , and computer scientists in the field of artificial intelligence have studied such biological swarms , and\nattempted to create models that explain how the elements of a swarm interact , achieve goals , and evolve .\nMoreover , in recent years the study of '' swarm intelligence '' has become increasingly important in the fields of robotics , the design of Mobile Ad-Hoc Networks ( MANETS ) , the design of Intrusion Detection Systems , the study of traffic patterns in transportation systems , in military applications , and other areas , see [ 3 ] .\nThe basic concepts that have been developed over the last decade to explain '' swarms , and '' swarm behavior '' include four basic components .\nThese are :\n1 .\nSimplicity of logic & actions : A swarm is composed of N agents whose intelligence is limited .\nAgents in the swarm use simple local rules to govern their actions .\nSome models called this primitive actions or behaviors ; 2 .\nLocal Communication Mechanisms : Agents interact with other members in the swarm via simple '' local '' communication mechanisms .\nFor example , a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow .\n3 .\nDistributed control : Autonomous agents interact with their environment , which probably consists of other agents , but act relatively independently from all other agents .\nThere is no central command or leader , and certainly there is no global plan .\n4 . 
''\nEmergent Intelligence '' : Aggregate behavior of autonomous agents results in complex '' intelligent '' behaviors ; including self-organization '' .\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms .\nThis model , which extends the work by Weaver [ 5 ] is presented here in section 2 .\nIn addition , we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts .\nSpecifically , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology .\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterpart .\nThe rest of this manuscript is structure as follows .\nIn section 2 an abstract model of both generic worms as well as swarm worms is presented .\nThis model is used in section 2.6 to described the first instance of a swarm worm , ZachiK .\nIn section 4 , preliminary results via both empirical measurements as well as simulation is presented .\nFinally , in section 5 our conclusions and insights into future work are presented .\n2 .\nWORM MODELING\nIn order to study the behavior of swarm worms in general , it is necessary to create a model that realistically reflects the structure of worms and it is not necessarily tied to a specific instance .\nIn this section , we described such a model where a general worm is describe as having four - ( 4 ) basic components or subfunctions .\nBy definition , a worm is a selfcontained , self propagating program .\nThus , in simple terms , it has two main functions : that which propagates and that which does '' other '' things .\nWe propose that there is a third broad functionality of a worm , that of self-preservation .\nWe also propose that the '' other '' functionality of a worm may be more appropriately categorized as Goal-Based Actions ( GBA ) , as whatever functionality included in a worm will naturally be dependent on whatever goals ( and subgoals ) the author has .\nThe work presented by Weaver et .\nal. 
in [ 5 ] provides us with mainly an action and technique based taxonomy of computer worms , which we utilize and further extend here .\n2.1 Propagation\nThe propagation function itself may be broken down into three actions : acquire target , send scan , and infect target .\nAcquiring the target simply means picking a host to attack next .\nSending a scan involves checking to see if that host is receptive to an infection attempt , since IP-space is sparsely populated .\nThis may involve a simple ping to check if the host is alive or a full out vulnerability assessment .\nInfecting the target is the actual method used to send the worm code to the new host .\nIn algorithm form :\nIn the case of a simple worm which does not first check to see if the host is available or susceptible ( such as Slammer ) , the scan method is dropped :\nEach of these actions may have an associated cost to its inclusion and execution , such as increased worm size and CPU or network load .\nDepending on the authors needs or requirements , these become limiting factors in what may be included in the worm 's actions .\nThis is discussed further after expanding upon these actions below .\n2.2 Target Acquisition :\nThe Target Acquisition phase of our worm algorithm is built directly off of the Target Discovery section in [ 5 ] .\nWeaver et .\nal. taxonomize this task into 5 separate categories .\nHere , we further extend their work through parameterization .\nScanning : Scanning may be considered an equation-based method for choosing a host .\nAny type of equation may be used to arrive at an IP address , but there are three main types seen thus far : sequential , random , and local preference .\nSequential scanning is exactly as it sounds : start at an IP address and increment through all the IP space .\nThis could carry with it the options of which IP to start with ( user chosen value , random , or based on IP of infected host ) and\nhow many times to increment ( continuous , chosen value , or subnet-based ) .\nRandom scanning is completely at random ( depending on the chosen PRNG method and its seed value ) .\nLocal preference scanning is a variance of either Sequential or Random , whereby it has a greater probability of choosing a local IP address over a remote one ( for example , the traditional 80/20 split ) .\nPre-generated Target Lists : Pre-generated Target Lists , or so called '' hit-lists '' , could include the options for percentage of total population and percentage wrong , or just number of IPs to include .\nImplicit to this type is the fact that the list is divided among a parent and its children , avoiding the problem of every instance hitting the exact same machines .\nExternally Generated Target Lists : Externally generated target lists depend on one or more external sources that can be queried for host data .\nThis will involve either servers that are normally publicly available , such as gaming meta-servers , or ones explicitly setup by the worm or worm author .\nThe normally available meta-servers could have parameters for rates of change , such as many popping up at night or leaving in the morning .\nEach server could also have a maximum queries/second that it would be able to handle .\nThe worm would also need a way of finding these servers , either hard-coded or through scanning .\nInternal Target Lists : Internal Target Lists are highly dependent on the infected host .\nThis method could parameterize the choice of how much info is on the host , such as '' all machines in subnet '' , '' all 
windows boxes in subnet '' , particular servers , number of internal/external , or some combination .\nPassive : Passive methods are determined by '' normal '' interactions between hosts .\nParameters may include a rate of interaction with particular machines , internal/external rate of interaction , or subnet-based rate of interaction .\nAny of these methods may also be combined to produce different types of target acquisition strategies .\nFor example , the a worm may begin with an initial hit-list of 100 different hosts or subnets .\nOnce it has exhausted its search using the hit-list , it may then proceed to perform random scanning with a 50 % local bias .\nIt is important to note , however , that the resource consumption of each method is not the same .\nDifferent methods may require the worm to be large , such as the extra bytes required by a hit-list , or to take more processing time , such as by searching the host for addresses of other vulnerable hosts .\nFurther research and analysis should be performed in this area to determine associated costs for using each method .\nThe costs could then be used in determining design tradeoffs that worm authors engage at .\nFor example , hit lists provide a high rate of infection , but at a high cost of worm payload size .\n2.2.1 Sending a Scan\nThe send scan function tests to see if the host is available for infection .\nThis can be as simple as checking if the host is up on the network or as complex as checking if the host is vulnerable to the exploit which will be used .\nThe sending of a scan before attempted infection can increase ` the scanning rate if the cost for failing an infection is greater than the cost of failing a scan or sending a scan plus infection ; and failures are more frequent than successes .\nOne important parameter to this would be the choice of transport protocol ( TCP/UDP ) or just simply the time for one successful scan and time for one failed scan .\nAlso , whether or not it tests for the host to be up or if it is a full test for the vulnerability ( or for multiple vulnerabilities ) .\n2.2.2 Infection Vector ( IV )\nThe particular infection vector used to access the remote host is mainly dependent on the particular vulnerability chosen to exploit .\nIn a non-specific sense , it is dependent on the transport protocol chosen to use and the message size to be sent .\nSection 3 of [ 5 ] also proposes three particular classes of IV : Self-carried , second channel , and embedded .\n2.3 Self Preservation\nThe Self Preservation actions of a worm may take many forms .\nIn the wild , worms have been observed to disable anti-virus software or prevent sending itself to certain antivirusknown addresses .\nThey have also been seen to attempt disabling of other worms which may be contending for the same system .\nWe also believe that a time-based throttled scanning may help the worm to '' slip under the radar '' .\nWe also propose a decoy method , whereby a worm will release a few children that '' cause a lot of noise '' so that the parent is not noticed .\nIt has also been proposed [ 5 ] that a worm cause damage to its host if , and only if , it is '' disturbed '' in some way .\nThis module could contain parameters for : probability of success in disabling anti-virus or other software updates , probability of being noticed and thus removed , or '' hardening '' of the host against other worms .\n2.4 Goal-Based Actions\nA worm 's GBA functionality depends on the author 's goal list .\nThe Payloads section of [ 5 ] provides some 
useful suggestions for such a module.\nThe opening of a back-door can make the host susceptible to more attacks.\nThis would involve a probability of the back-door being used and any associated traffic utilization.\nIt could also provide a list of other worms this host is now susceptible to, or a list of vulnerabilities this host now has.\nSpam relays and HTTP proxies of course have an associated bandwidth consumption or traffic pattern.\nInternet DoS attacks would have a set time of activation, a target, and a traffic pattern.\nData damage would have an associated probability that the host dies because of the damage.\nIn Figure 1, this general model of a worm is summarized.\nPlease note that in this model there is no learning, no (or very little) sharing of information between worm instances, and certainly no coordination of actions.\nIn the next section we expand the model to include such mechanisms and hence arrive at the general model of a swarm worm.\n2.5 Swarms - General Model\nAs described in section 1, the basic characteristics that distinguish swarm behavior from what merely appears to be collective coordinated behavior are four basic attributes.\nThese are:\n1.\nSimplicity of logic & actions; 2.\nLocal Communication Mechanisms; 3.\nDistributed control; and 4.\nEmergent Intelligence, including ''self-organization''.\nFigure 1: General Worm Model\nIn this work we aggregate all of these attributes under the general title of ''Learning, Communication, and Distributed Control''.\nThe presence of these attributes distinguishes swarm worms from otherwise regular worms, or from other types of malware such as Zombies.\nIn Figure 2, the generic model of a worm is expanded to include this set of actions.\nWithin this context, then, a worm like Slammer cannot be categorized as a swarm worm, because new instances of the worm do not coordinate their actions or share information.\nOn the other hand, Zombies and many other forms of DDoS, which at first glance may be considered swarm worms, are not.\nThis is simply due to the fact that in the case of Zombies, control is not distributed but rather centralized, and no emergent behaviors arise.\nThe latter property, the potential emergence of intelligence or new behaviors, is what makes swarm worms so potentially dangerous.\nFinally, when one considers the majority of recent disruptions to the Internet infrastructure in light of our description of swarm attacks, said disruptions can easily be categorized as precursors to truly swarm behavior.\nSpecifically,\n\u2022 DDoS - A large number of compromised hosts send useless packets requiring processing (Stacheldraht, http://www.cert.org/incidentnotes/IN-99-04.html).\nDDoS attacks are the early precursors to Swarm Attacks due to the large number of agents involved.\n\u2022 Code Red CrV1, Code Red II, Nimda - Exhibit early notions of swarm attacks, including a backdoor communication channel.\n\u2022 Staniford & Paxson in ''How to Own the Internet in Your Spare Time?
''\nexplore modifications to CrV1 , Code Red I , II with a '' swarm '' like type of behavior .\nFor example , they speculate on new worms which employ direct worm-to-worm communication , and employ programmable updates .\nFor example the Warhol worm , and Permutation-Scanning ( self coordinating ) worms .\n2.6 Swarm Worm : the details\nIn considering the creation of what we believed to be the first '' Swarm Worm '' in existence , we wanted to adhere as close as possible to the general model presented in section ??\nwhile at the same time facilitating large scale analysis , both empirical and through simulations , of the behavior of the swarm .\nFor this reason , we selected as the first instance\nFigure 2 : General Model of a Swarm Worm\nof the swarm a simple password cracking worm .\nThe objective of this worm simply is to infect a host by sequentially attempting to login into the host using well known passwords ( dictionary attack ) , passwords that have been discovered previously by any member of the swarm , and random passwords .\nOnce , a host is infected , the worm will create communication channels with both its '' known neighbors '' at that time , as well as with any offsprings that it successfully generates .\nIn this context a successful generation of an offspring means simply infecting a new host and replicating an exact copy of itself in such a host .\nWe call this swarm worm the ZachiK worm in honor of one of its creators .\nAs it can be seen from this description , the ZachiK worm exhibits all of the elements described before .\nIn the following sections , we described in detail each one of the elements of the ZachiK worm .\n2.7 Infection Vector\nThe infection vector used for ZachiK worm is the secure shell protocol SSH .\nA modified client which is capable of receiving passwords from the command line was written , and integrated with a script that supplies it with various passwords : known and random .\nWhen a password is found for an appropriate target , the infection process begins .\nAfter the root password of a host is discovered , the worm infects the target host and replicates itself .\nThe worm creates a new directory in the target host , copies the modified ssh client , the script , the communications servers , and the updated versions of data files ( list of known passwords and a list of current neighbors ) .\nIt then runs the modified script on the newly infected hosts , which spawns the communications server , notifies the neighbors and starts looking for new targets .\nIt could be argued , correctly , that the ZachiK worm can be easily defeated by current countermeasure techniques present on most systems today , such as disallowing direct root logins from the network .\nWithin this context ZachiK can quickly be discarded as very simple and harmless worm that does not require further study .\nHowever , the reader should consider the following : 1 .\nZachiK can be easily modified to include a variety of infection vectors .\nFor example , it could be programmed to guess common user names and their passwords ; gain\naccess to a system , then guess the root password or use other well know vulnerabilities to gain root privileges ;\n2 .\nZachiK is a proof of concept worm .\nThe importance of ZachiK is that it incorporates all of the behaviors of a '' swarm worm '' including , but not restricted to , distributed control , communication amongst agents , and learning ; 3 .\nZachiK is composed of a large collection of agents operating independently which lends itself 
naturally to parallel algorithms such as a parallel search of the IPV4 address space .\nWithin this context , SLAMMER , does incorporate a parallel search capability of potentially susceptible addresses .\nHowever , unlike ZachiK , the knowledge discovered by the search is never shared amongst the agents .\nFor this reasons , and many others , one should not discard the potential of this new class of worms but rather embrace its study .\n2.8 Self-Preservation\nIn the case of ZachiK worm , the main self-preservation techniques used are simply keeping the payload small .\nIn this context , this simply means restricting the number of passwords that an offspring inherits , masquerading worm messages as common http requests , and restricting the number of neighbors to a maximum of five - ( 5 ) .\n2.9 Propagation\nChoosing the next target ( s ) in an efficient matter requires thought .\nIn the past , known and proposed worms , see [ 5 ] , have applied propagation techniques that varied .\nThese include : strictly random selection of a potential vulnerable host ; target lists of vulnerable hosts ; locally biased random selection ( select a host target at random from a local subnet ) ; and a combination of some or all of the above .\nIn our test and simulation environments , we will apply a combination of locally biased and totally random selection of potential vulnerable hosts .\nHowever , due to the fact that the ZachiK worm is a swarm worm , address discovery ( that is when non-existent addresses are discovered ) information will be shared amongst members of the swarm .\nThe infection and propagation threads do the following set of activities repeatedly :\n\u2022 Choose an address \u2022 Check the validity of the address \u2022 Choose a set of passwords \u2022 Try infecting the selected host with this set of passwords\nAs described earlier , choosing an address makes use of a combination of random selection , local bias , and target lists .\nSpecifically , to choose an address , the instance may either :\n\u2022 Generate a new random address \u2022 Generate an address on the local network \u2022 Pick an address from a handoff list\nThe choice is made randomly among these options , and can be varied to test the dependency of propagation on particular choices .\nPassword are either chosen from the list of known passwords or newly generated .\nWhen an infection of a valid address fails , it is added to a list of handoffs , which is sent to the neighbors to try to work on .\n2.10.1 Communication\nThe concept of a swarm is based on transfer of information amongst neighbors , which relay their new incoming messages to their neighbors , and so on until every worm instance in the swarm is aware of these messages .\nThere are two classes of messages : data or information messages and commands .\nThe command messages are meant for an external user ( a.k.a. 
, hackers and/or crackers ) to control the actions of the instances , and are currently not implemented .\nThe information messages are currently of three kinds : new member , passwords and exploitable addresses ( '' handoffs '' ) .\nThe new member messages are messages that a new instance sends to the neighbors on its ( short ) list of initial neighbors .\nThe neighbors then register these instances in their neighbor list .\nThese are messages that form the multi-connectivity of the swarm , and without them , the topology will be a treelike structure , where eliminating a single node would cause the instances beneath it to be inaccessible .\nThe passwords messages inform instances of newly discovered passwords , and by informing all instances , the swarm as whole collects this information , which allows it to infect new instances more effectively .\nThe handoffs messages inform instances of valid addresses that could not be compromised ( fail at breaking the password for the root account ) .\nSince the address space is rather sparse , it takes a relatively long time ( i.e. many trials ) to discover a valid address .\nTherefore , by handing off discovered valid addresses , the swarm is ( a ) conserving '' energy '' by not re-discovering the same addresses ( b ) attacking more effectively .\nIn a way , this is a simple instance of coordinated activity of a swarm .\n2.10.2 Coordination\nWhen a worm instance is '' born '' , it relays its existence to all neighbors on its list .\nThe main thread then spawns a few infection threads , and continues to handle incoming messages ( registering neighbors , adding new passwords , receiving addresses and relaying these messages ) .\n2.10.3 Distributed Control\nControl in the ZachiK worm is distributed in the sense that each instance of the worm performs a set of actions independently of every other instance while at the same time benefiting from the learning achieve by its immediate neighbors .\n2.11 Goal Based Actions\nThe first instantiation of the ZachiK worm has two basic goals .\nThese are : ( 1 ) propagate , and ( 2 ) discover and share with members of th swarm new root passwords .\n3 .\nEXPERIMENTAL DESIGN\nIn order to verify our hypothesis that Swarm Worms are more capable , and therefore dangerous than other well known\nworms , a network testbed was created , and a simulator , capable of simulating large scale '' Internet-like '' topologies ( IPV4 space ) , was developed .\nThe network testbed consisted of a local area network of 30 Linux based computers .\nThe simulator was written in C++ .\nThe simple swarm worm described in section 2.6 was used to infect patient-zero , and then the swarm worm was allowed to propagate via its own mechanisms of propagation , distributed control , and swarm behaviors .\nIn the case of a simple local area network of 30 computers , six - ( 6 ) different root passwords out of a password space of 4 digits ( 10000 options ) were selected .\nAt the start of the experiment a single known password is known , that of patient-zero .\nAll shared passwords are distributed randomly across all nodes .\nSimilarly , in the case of the simulation , a network topology of 10,000 hosts , whose addresses were selected randomly across the IPV4 space , was constructed .\nWithin that space , a total of 200 shared passwords were selected and distributed either randomly and/or targeted to specific network topologies subnets .\nFor example , in one of our simulation runs , the network topology consisted of 200 subnets each containing 50 
hosts .\nIn such a topology , shared passwords were distributed across subnets where a varying percentage of passwords were shared across subnets .\nThe percentages of shared passwords used was reflective of early empirical studies , where up to 39.7 % of common passwords were found to be shared .\n4 .\nRESULTS\nIn Figure 3 , the results comparing Swarm Attack behavior versus that of a typical Malform Worm for a 30 node LAN are presented .\nIn this set of empirical runs , six - ( 6 ) shared passwords were distributed at random across all nodes from a possible of 10,000 unknown passwords .\nThe data presented reflects the behaviors of a total of three - ( 3 ) distinct classes of worm or swarm worms .\nThe class of worms presented are as follows :\n\u2022 I-NS-NL : = Generic worm , independent ( I ) , no learning/memoryless ( NL ) , and no sharing of information with neighbors or offsprings ( NS ) ; \u2022 S-L-SP : = Swarm Worm ( S ) , learning ( L ) , keeps list of learned passwords , and sharing of passwords ( SP ) across nearest neighbors and offsprings ; and \u2022 S-L-SP & A : = Swarm Worm ( S ) , learning ( L ) , keeps list of learned passwords , and sharing of passwords and existent addresses ( SP&A ) across nearest neighbors and offsprings .\nAs it is shown in Figure 3 , the results validate our original hypothesis that swarm worms are significantly more efficient and dangerous than generic worms .\nIn this set of experiments , the sharing of passwords provides an order of magnitude improvement over a memoryless random worm .\nSimilarly , a swarm worm that shares passwords and addresses is approximately two orders of magnitude more efficient than its generic counterpart .\nIn Figure 3 , a series of discontinuities can be observed .\nThese discontinuities are an artifact of the small sample space used for this experiment .\nBasically , as soon as a password is broken , all nodes sharing that specific password are infected within a few seconds .\nNote that it is trivial for a swarm worm to scan and discovered a small shared password space .\nIn Figure 4 , the simulation results comparing Swarm Attack Behavior versus that of a Generic Malform Worm are presented .\nIn this set of simulation runs , a network topology of 10,000 hosts , whose addresses were selected randomly across the IPV4 space , was constructed .\nWithin that space , a total of 200 shared passwords were selected and distributed either randomly and/or targeted to specific network topologies subnets .\nThe data presented reflects the behaviors of three - ( 3 ) distinct classes of worm or swarm worms and two ( 2 ) different target host selection scanning strategies ( random scanning and local bias ) .\nThe amount of local bias was varied across multiple simulation runs .\nThe results presented are aggregate behaviors .\nIn general the following class of Generic Worms and Swarm Worms were simulated .\nAddress Scanning :\n\u2022 Random : = addresses are selected at random from a subset of the IPV4 space , namely , a 224 address space ; and \u2022 Local Bias : = addresses are selected at random from either a local subnet ( 256 addresses ) or from a subset of the IPV4 space , namely , a 224 address space .\nThe percentage of local bias is varied across multiple runs .\nLearning , Communication & Distributed Control \u2022 I-NL-NS : Generic worm , independent ( I ) , no learning / memoryless ( NL ) , and no sharing of information with neighbors or offsprings ( NS ) ; \u2022 I-L-OOS : Generic worm , independent ( I ) , learning / 
memoryless ( L ) , and one time sharing of information with offsprings only ( OOS ) ; \u2022 S-L-SP : = Swarm Worm ( S ) , learning ( L ) , keeps list of learned passwords , and sharing of passwords ( SP ) across nearest neighbors and offsprings ; \u2022 S-L-S & AOP : = Swarm Worm ( S ) , learning ( L ) , keeps list of learned passwords , and sharing of addresses with neighbors and offsprings , shares passwords one time only ( at creation ) with offsprings ( SA&OP ) ; \u2022 S-L-SP & A : = Swarm Worm ( S ) , learning ( L ) , keeps list of learned passwords , and sharing of passwords and existent addresses ( SP&A ) across nearest neighbors and offsprings .\nAs it is shown in Figure 4 , the results are consistent with our set of empirical results .\nIn addition , the following observations can be made .\n1 .\nLocal preference is incredibly effective .\n2 .\nShort address handoffs are more effective than long ones .\nWe varied the size of the list allowed in the sharing of addresses ; the overhead associated with a long address list is detrimental to the performance of the swarm worm , as well as to its stealthiness ; 3 .\nFor the local bias case , sharing valid addresses of susceptible host , S-L-S & AOP worm ( recall , the S-L-S & AOP swarm shares passwords , one time only , with offsprings\nat creation time ) is more effective than sharing passwords in the case of the S-L-SP swarm .\nIn this case , we can think of the swarm as launching a distributeddictionary attack : different segments of the swarm use different passwords to try to break into susceptible uninfected host .\nIn the local bias mode , early in the life of the swarm , address-sharing is more effective than password-sharing , until most subnets are discovered .\nThen the targeting of local addresses assists in discovering the susceptible hosts , while the swarm members need to waste time rediscovering passwords ; and 4 .\nInfecting the last 0.5 % of nodes takes a very long time in non-local bias mode .\nBasically , the shared password list across subnets has been exhausted , and the swarm reverts to simply a random discovery of password algorithm .\nFigure 3 : Swarm Attack Behavior vs. Malform Worm : Empirical Results , 30node LAN Figure 4 : Swarm Attack Behavior vs. 
Malform Worm : Simulation Results\n5 .\nSUMMARY AND FUTURE WORK\nIn this manuscript , we have presented an abstract model , similar in some aspects to that of Weaver [ 5 ] , that helps explain the generic nature of worms .\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms .\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning , communication , and distributed intelligence .\nSuch Swarm Worms are potentially more harmful than their generic counterparts .\nIn addition , the first instance , to our knowledge , of such a learning worm was created , called ZachiK .\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies .\nSuch a swarm worm was deployed in both a local area network of thirty - ( 30 ) hosts , as well as simulated in a 10,000 node topology .\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities .\nThis work opens up a new area of interesting problems .\nSome of the most interesting and pressing problems to be consider are as follows : \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence , agent systems , and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place ?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms ?\n; and \u2022 What techniques , if any , can be developed to create defenses against swarm worms ?"} {"id": "C-19", "title": "", "abstract": "", "keyphrases": ["protocol framework", "distribut algorithm", "distribut system", "servic interfac", "network", "commun", "event-base framework", "stack", "modul", "request", "repli", "modular", "dynam protocol replac"], "prmu": [], "lvl-1": "Service Interface: A New Abstraction for Implementing and Composing Protocols\u2217 Olivier R\u00a8utti Pawe\u0142 T. 
Wojciechowski Andr\u00b4e Schiper Ecole Polytechnique F\u00b4ed\u00b4erale de Lausanne (EPFL) 1015 Lausanne, Switzerland {Olivier.Rutti, Pawel.Wojciechowski, Andre.Schiper}@epfl.\nch ABSTRACT In this paper we compare two approaches to the design of protocol frameworks - tools for implementing modular network protocols.\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules.\nWe argue that an alternative approach, that is based on service abstraction, is more suitable for expressing modular protocols.\nIt also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols.\nWe then describe an experimental implementation of a service-based protocol framework in Java.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Applications 1.\nINTRODUCTION Protocol frameworks, such Cactus [5, 2], Appia [1, 16], Ensemble [12, 17], Eva [3], SDL [8] and Neko[6, 20], are programming tools for developing modular network protocols.\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together.\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications.\nMoreover, protocol modules can be plugged in to the system dynamically.\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [14] - an important class of applications.\nMost protocol frameworks are based on events (all frameworks cited above are based on this abstraction).\nEvents are used for asynchronous communication between different modules on the same machine.\nHowever, the use of events raises some problems [4, 13].\nFor instance, the composition of modules may require connectors to route events, which introduces burden for a protocol composer [4].\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels.\nHowever, in our opinion, this solution is not satisfactory since composition of complex protocol stacks becomes more difficult.\nIn this paper, we propose a new approach for building modular protocols, that is based on a service abstraction.\nWe compare this new approach with the common, event-based approach.\nWe show that protocol frameworks based on services have several advantages, e.g. 
allow for a fairly straightforward protocol composition, clear implementation, and better support of dynamic replacement of distributed protocols.\nTo validate our claims, we have implemented SAMOA - an experimental protocol framework that is purely based on the service-based approach to module composition and implementation.\nThe framework allowed us to compare the service- and event-based implementations of an adaptive group communication middleware.\nThe paper is organized as follows.\nSection 2 defines general notions.\nSection 3 presents the main characteristics of event-based frameworks, and features that are distinct for each framework.\nSection 4 describes our new approach, which is based on service abstraction.\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework.\nThe description of our experimental implementation is presented in Section 6.\nFinally, we conclude in Section 7.\n2.\nPROTOCOL FRAMEWORKS In this section, we describe notions that are common to all protocol frameworks.\nProtocols and Protocol Modules.\nA protocol is a distributed algorithm that solves a specific problem in a distributed system, e.g. a TCP protocol solves the reliable channel problem.\nA protocol is implemented as a set of identical protocol modules located on different machines.\nProtocol Stacks.\nA stack is a set of protocol modules (of different protocols) that are located on the same machine.\nNote that, despite its name, a stack is not strictly layered, 691 i.e. a protocol module can interact with all other protocol modules in the same stack, not only with the protocol modules directly above and below.\nIn the remainder of this paper, we use the terms machine and stack interchangeably.\nStack 1 S1 Q1 R1 P1 Network Figure 1: Example of a protocol stack In Figure 1, we show an example protocol stack.\nWe represent protocol modules by capital letters indexed with a natural number, e.g. P1, Q1, R1 and S1.\nWe write Pi to denote the protocol module of a protocol P in stack i.\nWe use this notation throughout the paper.\nModules are represented as white boxes.\nArrows show module interactions.\nFor instance, protocol module P1 interacts with the protocol module Q1 and conversely (See Fig. 1).\nProtocol Module Interactions.\nBelow, we define the different kinds of interaction between protocol modules.\n\u2022 Requests are issued by protocol modules.\nA request by a protocol module Pi is an asynchronous call by Pi of another protocol module.\n\u2022 Replies are the results of a request.\nA single request can generate several replies.\nOnly protocol modules belonging to the same protocol as the module that has issued the request are concerned by the corresponding replies.\nFor example, a request by Pi generates replies that concern only protocol modules Pj.\n\u2022 Notifications can be used by a protocol module to inform (possibly many) protocol modules in the same stack about the occurrence of a specific event.\nNotifications may also be the results of a request.\n3.\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN Most existing protocol frameworks are event-based.\nExamples are Cactus [5, 2], Appia [1, 16] and Ensemble [12, 17].\nIn this section, we define the notion of an event in protocol frameworks.\nWe also explain how protocol modules are structured in event-based frameworks.\nEvents.\nAn event is a special object for indirect communication between protocol modules in the same stack.\nEvents may transport some information, e.g. 
a network message or some other data.\nWith events, the communication is indirect, i.e. a protocol module that triggers an event is not aware of the module(s) that handle the event.\nEvents enable one-to-many communication within a protocol stack.\nTriggering an event can be done either synchronously or asynchronously.\nIn the former case, the thread that triggers an event e is blocked until all protocol modules that handle e have terminated handling of event e.\nIn the latter case, the thread that triggers the event is not blocked.\nProtocol Modules.\nIn event-based protocol frameworks, a protocol module consists of a set of handlers.\nEach handler is dedicated to handling of a specific event.\nHandlers of the same protocol module may share data.\nHandlers can be dynamically bound to events.\nHandlers can also be unbound dynamically.\nUpon triggering some event e, all handlers bound to e are executed.\nIf no handler is bound, the behavior is usually unspecified.\nStack 1 P1 Q1 R1 S1 Network f e gg deliver send h Figure 2: Example of an event-based protocol stack In Figure 2, we show an example of an event-based stack.\nEvents are represented by small letters, e.g. e, f, ... The fact that a protocol module can trigger an event is represented by an arrow starting from the module.\nA white trapezoid inside a module box represents a handler defined by the protocol module.\nTo mark that some handler is bound to event e, we use an arrow pointing to the handler (the label on the arrow represents the event e).\nFor example, the protocol module P1 triggers event e and handles event f (see Fig. 2).\nNote that the network is represented as a special protocol module that handles the send event (to send a message to another machine) and triggers the deliver event (upon receipt of a message from another machine).\nSpecific Features.\nSome protocol frameworks have unique features.\nBelow, we present the features that influence composition and implementation of protocol modules.\nIn Cactus [5, 2], the programmer can give a priority number to a handler upon binding it to an event.\nWhen an event is triggered, all handlers are executed following the order of priority.\nA handler h is also able to cancel the execution of an event trigger: all handlers that should be executed after h according to the priority are not executed.\nAppia [1, 16] and Eva [3] introduce the notion of channels.\nChannels allow to build routes of events in protocol stacks.\nEach protocol module has to subscribe to one or many channels.\nAll events are triggered by specifying a channel they belong to.\nWhen a protocol module triggers an event e specifying channel c, all handlers bound to e that are part of a protocol that subscribes to c are executed (in the order prescribed by the definition of channel c).\n4.\nSERVICE-BASED PROTOCOL FRAMEWORK In this section, we describe our new approach for implementing and composing protocols that is based on services.\n692 We show in Section 5 the advantages of service-based protocol frameworks over event-based protocol frameworks.\nService Interface.\nIn our service-based framework, protocol modules in the same stack communicate through objects called service interfaces.\nRequests, replies and notifications are all issued to service interfaces.\nProtocol Modules.\nA protocol module is a set of executers, listeners and interceptors.\nExecuters handle requests.\nAn executer can be dynamically bound to a service interface.\nIt can be later unbound.\nA request issued to a service interface si leads 
to the execution of the executer bound to si.\nIf no executer is bound to si, the request is delayed until some executer is bound to si.\nContrary to events, at most one executer at any time can be bound to a service interface on every machine.\nListeners handle replies and notifications.\nA listener can be dynamically bound and unbound to/from a service interface si.\nA notification issued to a service interface si is handled by all listeners bound to si in the local stack.\nA reply issued to a service interface is handled by one single listener.\nTo ensure that one single listener handles a reply, a module Pi has to identify, each time it issues a request, the listener to handle the possible reply.\nIf the request and the reply occur respectively, in stack i and in stack j, the service interface si on i communicates to the service interface si on j the listener that must handle the reply.\nIf the listener that must handle the reply does not exist, the reply is delayed until the listener is created.\nStack 1 P1 Q1 R1 S1 Network t u nt Figure 3: Example of a service-based protocol stack In Figure 3, we show an example of a service-based stack.\nWe denote a service interface by a small letter (e.g. t, u and nt) in a hexagonal box.\nThe fact that a module Pi can generate a request to a service interface si is represented by a dashed black arrow going from Pi to si.\nSimilarly, a dashed white arrow going from module Pi to service interface si represents the fact that Pi can generate a reply or a notification to si.\nWe represent executers with white boxes inside protocol modules and listeners with white boxes with a gray border.\nA connecting line between a service interface si and an executer e (resp.\na listener l) shows that e (resp.\nl) is bound to si.\nIn Figure 3, module Q1 contains an executer bound to service interface t and a listener bound to service interface u. Module Q1 can generate replies and notifications to service interface t and requests to service interface u. 
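To make the binding and dispatch rules just described concrete, the following is a minimal Java sketch of a service interface with at most one bound executer and a set of bound listeners. It is only an illustrative sketch under the semantics stated above, not the SAMOA implementation described later in the paper; all class, interface, and method names (ServiceInterface, Executer, Listener, bindExecuter, request, notification, reply) are hypothetical, and interceptors are omitted.

// Illustrative sketch only: hypothetical names, not the SAMOA classes of Section 6.
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.CopyOnWriteArrayList;

interface Executer<Req> { void handle(Req request); }          // handles requests
interface Listener<Resp> { void deliver(Resp response); }      // handles replies and notifications

final class ServiceInterface<Req, Resp> {
    private Executer<Req> executer;                            // at most one executer bound per stack
    private final List<Listener<Resp>> listeners =
            new CopyOnWriteArrayList<Listener<Resp>>();
    private final Queue<Req> pending = new ArrayDeque<Req>();  // requests issued while no executer is bound

    synchronized void bindExecuter(Executer<Req> e) {
        executer = e;
        while (!pending.isEmpty()) {                           // delayed requests are flushed once an executer is bound
            executer.handle(pending.poll());
        }
    }

    synchronized void unbindExecuter() { executer = null; }

    void bindListener(Listener<Resp> l)   { listeners.add(l); }
    void unbindListener(Listener<Resp> l) { listeners.remove(l); }

    // A request is executed by the single bound executer, or delayed if none is bound.
    synchronized void request(Req req) {
        if (executer != null) { executer.handle(req); } else { pending.add(req); }
    }

    // A notification is handled by every listener bound in the local stack.
    void notification(Resp resp) {
        for (Listener<Resp> l : listeners) { l.deliver(resp); }
    }

    // A reply is handled by exactly one listener, identified when the request was issued.
    void reply(Resp resp, Listener<Resp> target) {
        target.deliver(resp);
    }
}

Under this sketch, a module such as Q1 above would call bindExecuter on t and bindListener on u; the reply path assumes that the issuing module names the target listener at the time the request is made, mirroring the rule stated earlier.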
Note that the service interface nt allows to access the network.\nP1 Q1 P1 Q1 T1T1 t t t Figure 4: Execution of protocol interactions with interceptors An interceptor plays a special r\u02c6ole.\nSimilarly to executers, interceptors can be dynamically bound or unbound to a service interface.\nThey are activated each time a request, a reply or a notification is issued to the service interface they are bound to.\nThis is illustrated in Figure 4.\nIn the right part of the figure, the interceptor of the protocol module T1 is represented by a rounded box.\nThe interceptor is bound to service interface t.\nThe left part of the figure shows that an interceptor can be seen as an executer plus a listener.\nWhen P1 issues a request req to the service interface t, the executer-interceptor of T1 is executed.\nThen, module T1 may forward a request req to the service interface t, where we can have req = req 1 .\nWhen module Q1 issues a reply or a notification, a similar mechanism is used, except that this time the listener-interceptor of T1 is executed.\nNote that a protocol module Ti, that has an interceptor bound to a service interface, is able to modify requests, replies and notifications.\nUpon requests, if several interceptors are bound to the same service interface, they are executed in the order of binding.\nUpon replies and notifications, the order is reversed.\n5.\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN We show in this section the advantages of service-based protocol frameworks over event-based protocol frameworks.\nWe structure our discussion in three parts.\nFirstly, we present how protocol interactions are modeled in each of the protocol frameworks.\nThen, we discuss the composition of protocol modules in each of these frameworks.\nFinally, we present the problem of dynamic protocol replacement and the advantages of service interfaces in order to implement it.\nThe discussion is summarized in Table 1.\n5.1 Protocol Module Interactions A natural model of protocol interactions (as presented in Section 2) facilitates the implementation of protocol modules.\nFor each protocol interaction, we show how it is modeled in both frameworks.\nWe also explain that an inadequate model may lead to problems.\nRequests.\nIn service-based frameworks, a request is generated to a service interface.\nEach request is handled by at most one executer, since we allow only one executer to be bound to a service interface at any time.\nOn the other hand, in event-based frameworks, a protocol module emulates a request by triggering an event.\nThere is no guarantee 1 The two service interfaces t in the left part of Figure 4 represent the same service interface t.\nThe duplication is only to make the figure readable.\n693 that this event is bound to only one handler, which may lead to programming errors.\nReplies.\nWhen a protocol module generates a reply in a service-based framework, only the correct listener (identified at the time the corresponding request was issued) is executed.\nThis ensures that a request issued by some protocol module Qi, leads to replies handled by protocol modules Qj (i.e. 
protocol modules of the same protocol).\nThis is not the case in event-based frameworks, as we now show.\nConsider protocol module Q1 in Figure 2 that triggers event g to emulate a request.\nModule S1 handles the request.\nWhen modules Si triggers event h to emulate a reply (remember that a reply can occur in many stacks), both modules Qi and Ri will handle the reply (they both contain a handler bound to h).\nThis behavior is not correct: only protocol modules Qi should handle the reply.\nMoreover, as modules Ri are not necessarily implemented to interact with modules Qi, this behavior may lead to errors.\nSolutions to solve this problem exist.\nHowever, they introduce an unnecessary burden on the protocol programmers and the stack composer.\nFor instance, channels allow to route events to ensure that modules handle only events concerning them.\nHowever, the protocol programmer must take channels into account when implementing protocols.\nMoreover, the composition of complex stacks becomes more difficult due to the fact that the composer has to create many channels to ensure that modules handle events correctly.\nAn addition of special protocol modules (named connectors) for routing events is also not satisfactory, since it requires additional work from the composer and introduces overhead.\nNotifications.\nContrary to requests and replies, notifications are well modeled in event-based frameworks.\nThe reason is that notifications correspond to the one-to-many communication scheme provided by events.\nIn service-based frameworks, notifications are also well modeled.\nWhen a module generates a notification to a service interface si, all listeners bound to s are executed.\nNote that in this case, service interfaces provide the same pattern of communication as events.\n5.2 Protocol Module Composition Replies (and sometimes notifications) are the results of a request.\nThus, there is a semantic link between them.\nThe composer of protocol modules must preserve this link in order to compose correct stacks.\nWe explain now that service based frameworks provide a mechanism to preserve this link, while in event-based frameworks, the lack of such mechanism leads to error-prone composition.\nIn service-based frameworks, requests, replies and notifications are issued to a service interface.\nThus, a service interface introduces a link between these interactions.\nTo compose a correct stack, the composer has to bound a listener to service interface si for each module that issues a request to si.\nThe same must be done for one executer that is part of a module that issues replies or notifications.\nApplying this simple methodology ensures that every request issued to a service interface si eventually results in several replies or notifications issued to the same service interface si.\nIn event-based frameworks, all protocol interactions are issued through different events: there is no explicit link between an event triggered upon requests and an event triggered upon the corresponding replies.\nThus, the composer of a protocol stack must know the meaning of each event in order to preserve the semantic link between replies (and notifications) and requests.\nMoreover, nothing prevents from binding a handler that should handle a request to an event used to issue a reply.\nNote that these problems can be partially solved by typing events and handlers.\nHowever, it does not prevent from errors if there are several instances of the same event type.\nNote that protocol composition is clearer in the protocol 
frameworks that are based on services, rather than on events.\nThe reason is that several events that are used to model different protocol interactions can be modeled by a single service interface.\n5.3 Dynamic Replacement of Protocols Dynamic replacement of protocols consists in switching on-the-fly between protocols that solve the same problem.\nReplacement of a protocol P by a new protocol newP means that a protocol module Pi is replaced by newPi in every stack i.\nThis replacement is problematic since the local replacements (within stacks) must be synchronized in order to guarantee protocol correctness [21, 18].\nQ1 Q1 R1 P1 1P 1newP 1 Repl\u2212P1 Repl\u2212P1 R newP1 gg h h'' g'' t Figure 5: Dynamic replacement of protocol P For the synchronization algorithms to work, module interactions are intercepted in order to detect a time when Pi should be replaced by newPi.\n(Other solutions, e.g. in [11], are more complex.)\nIn Fig. 5, we show how this interception can be implemented in protocol frameworks that are based on services (in the left part of the figure) and events (in the right part of the figure).\nThe two-sided arrows point to the protocol modules P1 and newP1 that are switched.\nIt can be seen that the approach that uses the Service Interface mechanism has advantages.\nThe intercepting module Repl-P1 has an interceptor bound to service interface t that intercepts every request handled by modules P1 and all replies and notifications issued by P1.\nThe code of the module P1 can therefore remain unchanged.\nIn event-based frameworks, the solution is to add an intermediate module Repl-P1 that intercepts the requests issued to P1 and also the replies and notifications issued by P1.\nAlthough this ad-hoc solution may seem similar to the servicebased approach, there is an important difference.\nThe eventbased solution requires to slightly modify the module P1 since instead of handling event g and triggering event h, P1 must now handle different events g'' and h'' (see Fig. 5).\n6.\nIMPLEMENTATION We have implemented an experimental service-based protocol framework (called SAMOA) [7].\nOur implementation is light-weight: it consists of approximately 1200 lines of code in Java 1.5 (with generics).\nIn this section, we describe the main two classes of our implementation: Service (encoding the Service Interface) and 694 service-based event-based Protocol Interaction an adequate an inadequate representation representation Protocol Composition clear and safe complex and error-prone Dynamic Replacement an integrated ad-hoc solutions mechanism Table 1: Service-based vs. 
event-based Protocol (encoding protocol modules).\nFinally, we present an example protocol stack that we have implemented to validate the service-based approach.\nThe Service Class.\nA Service object is characterized by the arguments of requests and the arguments of responses.\nA response is either a reply or a notification.\nA special argument, called message, determines the kind of interactions modeled by the response.\nA message represents a piece of information sent over the network.\nWhen a protocol module issues a request, it can give a message as an argument.\nThe message can specify the listener that must handle the reply.\nWhen a protocol module issues a response to a service interface, a reply is issued if one of the arguments of the response is a message specifying a listener.\nOtherwise, a notification is issued.\nExecuters, listeners and interceptors are encoded as innerclasses of the Service class.\nThis allows to provide type-safe protocol interactions.\nFor instance, executers can only be bound to the Service object, they belong to.\nThus, the parameters passed to requests (that are verified statically) always correspond to the parameters accepted by the corresponding executers.\nThe type of a Service object is determined by the type of the arguments of requests and responses.\nA Service object t is compatible with another Service object s if the type of the arguments of requests (and responses) of t is a subtype of the arguments of requests (and responses) of s.\nIn practice, if a protocol module Pi can issue a request to a protocol UDP, then it may also issue a request to TCP (compatible with UDP) due to the subtyping relation on parameters of communicating modules.\nThe Protocol Class.\nA Protocol object consists of three sets of components, one set for each component type (a listener, an executer, and an interceptor).\nProtocol objects are characterized by names to retrieve them easily.\nMoreover, we have added some features to bind and unbind all executers or interceptors to/from the corresponding Service objects.\nProtocol objects can be loaded to a stack dynamically.\nAll these features made it easy to implement dynamic replacement of network protocols.\nProtocol Stack Implementation.\nTo validate our ideas, we have developed an Adaptive Group Communication (AGC) middleware, adopting both the service- and the event-based approaches.\nFig. 6 shows the corresponding stacks of the AGC middleware.\nBoth stacks allow the Consensus and Atomic Broadcast protocols to be dynamically updated.\nThe architecture of our middleware, shown in Fig. 6, builds on the group communication stack described in [15].\nThe UDP and RP2P modules provide respectively, unreliable and reliable point-to-point transport.\nThe FD module implements a failure detector; we assume that it ensures the Stack 1 UDP1RP2P1 Repl CT1 1ABc.\nRepl CT1 ABc.1 Network FD1 GM1 rp2p nt udp d f abcast consensus Stack 1 Repl CT1 1ABc.\nRepl ABc.1 UDP1 FD1 RP2P1 CT1 Network 1GM send deliver Figure 6: Adaptive Group Communication Middleware: service-based (left) vs. 
event-based (right) properties of the 3S failure detector [9].\nThe CT module provides a distributed consensus service using the ChandraToueg algorithm [10].\nThe ABc.\nmodule implements atomic broadcast - a group communication primitive that delivers messages to all processes in the same order.\nThe GM module provides a group membership service that maintains consistent membership data among group members (see [19] for details).\nThe Repl ABc.\nand the Repl CT modules implement the replacement algorithms [18] for, respectively, the ABc.\nand the CT protocol modules.\nNote that each arrow in the event-based architecture represents an event.\nWe do not name events in the figure for readability.\nThe left stack in Figure 6 shows the implementation of AGC with our service-based framework.\nThe right stack shows the same implementation with an event-based framework.\nPerformance Evaluation.\nTo evaluate the overhead of service interfaces, we compared performance of the serviceand event-based implementations of the AGC middleware.\nThe latter implementation of AGC uses the Cactus protocol framework [5, 2].\nIn our experiment, we compared the average latency of Atomic Broadcast (ABcast), which is defined as follows.\nConsider a message m sent using ABcast.\nWe denote by ti(m) the time between the moment of sending m and the moment of delivering m on a machine (stack) i.\nWe define the average latency of m as the average of ti(m) for all machines (stacks) i within a group of stacks.\nPerformance tests have been made using a cluster of PCs running Red Hat Linux 7.2, where each PC has a Pentium III 766 MHz processor and 128MB of RAM.\nAll PCs are interconnected by a 100 Base-TX duplex Ethernet hub.\nOur experiment has involved 7 machines (stacks) that ABcast messages of 4Mb under a constant load, where a load is a number of messages per second.\nIn Figure 7, we show the results of our experiment for different loads.\nLatencies are shown on the vertical axis, while message loads are shown on the horizontal axis.\nThe solid line shows the results obtained with our service-based framework.\nThe dashed line shows the results obtained with the Cactus framework.\nThe 695 0 500 1000 1500 2000 10 20 30 40 50 60 70 80 90 100 Averagelatency[ms] Load [msg/s] Service-Based Framework Cactus Figure 7: Comparison between our service-based framework and Cactus overhead of the service-based framework is approximately 10%.\nThis can be explained as follows.\nFirstly, the servicebased framework provides a higher level abstraction, which has a small cost.\nSecondly, the AGC middleware was initially implemented and optimized for the event-based Cactus framework.\nHowever, it is possible to optimize the AGC middleware for the service-based framework.\n7.\nCONCLUSION In the paper, we proposed a new approach to the protocol composition that is based on the notion of Service Interface, instead of events.\nWe believe that the service-based framework has several advantages over event-based frameworks.\nIt allows us to: (1) model accurately protocol interactions, (2) reduce the risk of errors during the composition phase, and (3) simply implement dynamic protocol updates.\nA prototype implementation allowed us to validate our ideas.\n8.\nREFERENCES [1] The Appia project.\nDocumentation available electronically at http://appia.di.fc.ul.pt/.\n[2] Nina T. Bhatti, Matti A. Hiltunen, Richard D. 
Schlichting, and Wanda Chiu.\nCoyote: a system for constructing fine-grain configurable communication services.\nACM Transactions on Computer Systems, 16(4):321-366, November 1998.\n[3] Francisco Vilar Brasileiro, Fab\u00b4\u0131ola Greve, Frederic Tronel, Michel Hurfin, and Jean-Pierre Le Narzul.\nEva: An event-based framework for developing specialized communication protocols.\nIn Proceedings of the 1st IEEE International Symposium on Network Computing and Applications (NCA ``01), 2001.\n[4] Daniel C. B\u00a8unzli, Sergio Mena, and Uwe Nestmann.\nProtocol composition frameworks.\nA header-driven model.\nIn Proceedings of the 4th IEEE International Symposium on Network Computing and Applications (NCA ``05), July 2005.\n[5] The Cactus project.\nDocumentation available electronically at http://www.cs.arizona.edu/ cactus/.\n[6] The Neko project.\nDocumentation available electronically at http://lsrwww.epfl.ch/neko/.\n[7] The SAMOA project.\nDocumentation available electronically at http://lsrwww.epfl.ch/samoa/.\n[8] The SDL project.\nDocumentation available electronically at http://www.sdl-forum.org/SDL/.\n[9] Tushar Deepak Chandra, Vassos Hadzilacos, and Sam Toueg.\nThe weakest failure detector for solving consensus.\nJournal of the ACM, 43(4):685-722, 1996.\n[10] Tushar Deepak Chandra and Sam Toueg.\nUnreliable failure detectors for reliable distributed systems.\nJournal of the ACM, 43(2):225-267, 1996.\n[11] Wen-Ke Chen, Matti A. Hiltunen, and Richard D. Schlichting.\nConstructing adaptive software in distributed systems.\nIn Proceedings of the 21st IEEE International Conference on Distributed Computing System (ICDCS ``01), April 2001.\n[12] The Ensemble project.\nDocumentation available electronically at http://www.cs.cornell.edu/Info/ Projects/Ensemble/.\n[13] Richard Ekwall, Sergio Mena, Stefan Pleisch, and Andr\u00b4e Schiper.\nTowards flexible finite-state-machine-based protocol composition.\nIn Proceedings of the 3rd IEEE International Symposium on Network Computing and Applications (NCA ``04), August 2004.\n[14] Philip K. McKinley, Seyed Masoud Sadjadi, Eric P. Kasten, and Betty H.C. Cheng.\nComposing adaptive software.\nIEEE Computer, 37(7):56-64, 2004.\n[15] Sergio Mena, Andr\u00b4e Schiper, and Pawel T. Wojciechowski.\nA step towards a new generation of group communication systems.\nIn Proceedings of the 4th ACM/IFIP/USENIX International Middleware Conference (Middleware ``03), LNCS 2672, June 2003.\n[16] Hugo Miranda, Alexandre Pinto, and Lu\u00b4\u0131s Rodrigues.\nAppia, a flexible protocol kernel supporting multiple coordinated channels.\nIn Proceedings of the 21st IEEE International Conference on Distributed Computing Systems (ICDCS ``01), April 2001.\n[17] Ohad Rodeh, Kenneth P. Birman, Mark Hayden, Zhen Xiao, and Danny Dolev.\nThe architecture and performance of security protocols in the Ensemble group communication system.\nTechnical Report TR-98-1703, Computer Science Department, Cornell University, September 1998.\n[18] Olivier R\u00a8utti, Pawel T. 
Wojciechowski, and Andr\u00b4e Schiper.\nDynamic update of distributed agreement protocols.\nTR IC-2005-12, School of Computer and Communication Sciences, Ecole Polytechnique F\u00b4ed\u00b4erale de Lausanne (EPFL), March 2005.\n[19] Andr\u00b4e Schiper.\nDynamic Group Communication.\nTechnical Report IC-2003-27, School of Computer and Communication Sciences, Ecole Polytechnique F\u00b4ed\u00b4erale de Lausanne (EPFL), April 2003.\nTo appear in ACM Distributed Computing.\n[20] P\u00b4eter Urb\u00b4an, Xavier D\u00b4efago, and Andr\u00b4e Schiper.\nNeko: A single environment to simulate and prototype distributed algorithms.\nIn Proceedings of the 15th International Conference on Information Networking (ICOIN ``01), February 2001.\n[21] Pawel T. Wojciechowski and Olivier R\u00a8utti.\nOn correctness of dynamic protocol update.\nIn Proceedings of the 7th IFIP Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS ``05), LNCS 3535.\nSpringer, June 2005.\n696", "lvl-3": "Service Interface : A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks -- tools for implementing modular network protocols .\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules .\nWe argue that an alternative approach , that is based on service abstraction , is more suitable for expressing modular protocols .\nIt also facilitates advanced features in the design of protocols , such as dynamic update of distributed protocols .\nWe then describe an experimental implementation of a service-based protocol framework in Java .\n1 .\nINTRODUCTION\nProtocol frameworks , such Cactus [ 5 , 2 ] , Appia [ 1 , 16 ] , Ensemble [ 12 , 17 ] , Eva [ 3 ] , SDL [ 8 ] and Neko [ 6 , 20 ] , are programming tools for developing modular network protocols .\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together .\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications .\nMoreover , protocol modules can be plugged in to the system dynamically .\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [ 14 ] - an important class of applications .\n* Research supported by the Swiss National Science Foundation under grant number 21-67715 .02 and Hasler Stiftung under grant number DICS-1825 .\nMost protocol frameworks are based on events ( all frameworks cited above are based on this abstraction ) .\nEvents are used for asynchronous communication between different modules on the same machine .\nHowever , the use of events raises some problems [ 4 , 13 ] .\nFor instance , the composition of modules may require connectors to route events , which introduces burden for a protocol composer [ 4 ] .\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels .\nHowever , in our opinion , this solution is not satisfactory since composition of complex protocol stacks becomes more difficult .\nIn this paper , we propose a new approach for building modular protocols , that is based on a service abstraction .\nWe compare this new approach with the common , event-based approach .\nWe show that protocol frameworks based on services have several advantages , e.g. 
allow for a fairly straightforward protocol composition , clear implementation , and better support of dynamic replacement of distributed protocols .\nTo validate our claims , we have implemented SAMOA -- an experimental protocol framework that is purely based on the service-based approach to module composition and implementation .\nThe framework allowed us to compare the service - and event-based implementations of an adaptive group communication middleware .\nThe paper is organized as follows .\nSection 2 defines general notions .\nSection 3 presents the main characteristics of event-based frameworks , and features that are distinct for each framework .\nSection 4 describes our new approach , which is based on service abstraction .\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework .\nThe description of our experimental implementation is presented in Section 6 .\nFinally , we conclude in Section 7 .\n2 .\nPROTOCOL FRAMEWORKS\n3 .\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN\n4 .\nSERVICE-BASED PROTOCOL FRAMEWORK\n5 .\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN\n5.1 Protocol Module Interactions\n5.2 Protocol Module Composition\n5.3 Dynamic Replacement of Protocols\n6 .\nIMPLEMENTATION\n7 .\nCONCLUSION\nIn the paper , we proposed a new approach to the protocol composition that is based on the notion of Service Interface , instead of events .\nWe believe that the service-based framework has several advantages over event-based frameworks .\nIt allows us to : ( 1 ) model accurately protocol interactions , ( 2 ) reduce the risk of errors during the composition phase , and ( 3 ) simply implement dynamic protocol updates .\nA prototype implementation allowed us to validate our ideas .", "lvl-4": "Service Interface : A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks -- tools for implementing modular network protocols .\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules .\nWe argue that an alternative approach , that is based on service abstraction , is more suitable for expressing modular protocols .\nIt also facilitates advanced features in the design of protocols , such as dynamic update of distributed protocols .\nWe then describe an experimental implementation of a service-based protocol framework in Java .\n1 .\nINTRODUCTION\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together .\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications .\nMoreover , protocol modules can be plugged in to the system dynamically .\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [ 14 ] - an important class of applications .\nMost protocol frameworks are based on events ( all frameworks cited above are based on this abstraction ) .\nEvents are used for asynchronous communication between different modules on the same machine .\nFor instance , the composition of modules may require connectors to route events , which introduces burden for a protocol composer [ 4 ] .\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels .\nHowever , in our opinion , this solution is not satisfactory since composition of complex protocol stacks becomes more difficult 
.\nIn this paper , we propose a new approach for building modular protocols , that is based on a service abstraction .\nWe compare this new approach with the common , event-based approach .\nWe show that protocol frameworks based on services have several advantages , e.g. allow for a fairly straightforward protocol composition , clear implementation , and better support of dynamic replacement of distributed protocols .\nTo validate our claims , we have implemented SAMOA -- an experimental protocol framework that is purely based on the service-based approach to module composition and implementation .\nThe framework allowed us to compare the service - and event-based implementations of an adaptive group communication middleware .\nSection 2 defines general notions .\nSection 3 presents the main characteristics of event-based frameworks , and features that are distinct for each framework .\nSection 4 describes our new approach , which is based on service abstraction .\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework .\nThe description of our experimental implementation is presented in Section 6 .\nFinally , we conclude in Section 7 .\n7 .\nCONCLUSION\nIn the paper , we proposed a new approach to the protocol composition that is based on the notion of Service Interface , instead of events .\nWe believe that the service-based framework has several advantages over event-based frameworks .\nA prototype implementation allowed us to validate our ideas .", "lvl-2": "Service Interface : A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks -- tools for implementing modular network protocols .\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules .\nWe argue that an alternative approach , that is based on service abstraction , is more suitable for expressing modular protocols .\nIt also facilitates advanced features in the design of protocols , such as dynamic update of distributed protocols .\nWe then describe an experimental implementation of a service-based protocol framework in Java .\n1 .\nINTRODUCTION\nProtocol frameworks , such Cactus [ 5 , 2 ] , Appia [ 1 , 16 ] , Ensemble [ 12 , 17 ] , Eva [ 3 ] , SDL [ 8 ] and Neko [ 6 , 20 ] , are programming tools for developing modular network protocols .\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together .\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications .\nMoreover , protocol modules can be plugged in to the system dynamically .\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [ 14 ] - an important class of applications .\n* Research supported by the Swiss National Science Foundation under grant number 21-67715 .02 and Hasler Stiftung under grant number DICS-1825 .\nMost protocol frameworks are based on events ( all frameworks cited above are based on this abstraction ) .\nEvents are used for asynchronous communication between different modules on the same machine .\nHowever , the use of events raises some problems [ 4 , 13 ] .\nFor instance , the composition of modules may require connectors to route events , which introduces burden for a protocol composer [ 4 ] .\nProtocol frameworks such as Appia and Eva extend the 
event-based approach with channels .\nHowever , in our opinion , this solution is not satisfactory since composition of complex protocol stacks becomes more difficult .\nIn this paper , we propose a new approach for building modular protocols , that is based on a service abstraction .\nWe compare this new approach with the common , event-based approach .\nWe show that protocol frameworks based on services have several advantages , e.g. allow for a fairly straightforward protocol composition , clear implementation , and better support of dynamic replacement of distributed protocols .\nTo validate our claims , we have implemented SAMOA -- an experimental protocol framework that is purely based on the service-based approach to module composition and implementation .\nThe framework allowed us to compare the service - and event-based implementations of an adaptive group communication middleware .\nThe paper is organized as follows .\nSection 2 defines general notions .\nSection 3 presents the main characteristics of event-based frameworks , and features that are distinct for each framework .\nSection 4 describes our new approach , which is based on service abstraction .\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework .\nThe description of our experimental implementation is presented in Section 6 .\nFinally , we conclude in Section 7 .\n2 .\nPROTOCOL FRAMEWORKS\nIn this section , we describe notions that are common to all protocol frameworks .\nProtocols and Protocol Modules .\nA protocol is a distributed algorithm that solves a specific problem in a distributed system , e.g. a TCP protocol solves the reliable channel problem .\nA protocol is implemented as a set of identical protocol modules located on different machines .\nProtocol Stacks .\nA stack is a set of protocol modules ( of different protocols ) that are located on the same machine .\nNote that , despite its name , a stack is not strictly layered ,\ni.e. a protocol module can interact with all other protocol modules in the same stack , not only with the protocol modules directly above and below .\nIn the remainder of this paper , we use the terms machine and stack interchangeably .\nNetwork\nFigure 1 : Example of a protocol stack\nIn Figure 1 , we show an example protocol stack .\nWe represent protocol modules by capital letters indexed with a natural number , e.g. P1 , Q1 , R1 and S1 .\nWe write Pi to denote the protocol module of a protocol P in stack i .\nWe use this notation throughout the paper .\nModules are represented as white boxes .\nArrows show module interactions .\nFor instance , protocol module P1 interacts with the protocol module Q1 and conversely ( See Fig. 
1 ) .\nProtocol Module Interactions .\nBelow , we define the different kinds of interaction between protocol modules .\n\u2022 Requests are issued by protocol modules .\nA request by a protocol module Pi is an asynchronous call by Pi of another protocol module .\n\u2022 Replies are the results of a request .\nA single request can generate several replies .\nOnly protocol modules belonging to the same protocol as the module that has issued the request are concerned by the corresponding replies .\nFor example , a request by Pi generates replies that concern only protocol modules Pj .\n\u2022 Notifications can be used by a protocol module to inform ( possibly many ) protocol modules in the same stack about the occurrence of a specific event .\nNotifications may also be the results of a request .\n3 .\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN\nMost existing protocol frameworks are event-based .\nExamples are Cactus [ 5 , 2 ] , Appia [ 1 , 16 ] and Ensemble [ 12 , 17 ] .\nIn this section , we define the notion of an event in protocol frameworks .\nWe also explain how protocol modules are structured in event-based frameworks .\nEvents .\nAn event is a special object for indirect communication between protocol modules in the same stack .\nEvents may transport some information , e.g. a network message or some other data .\nWith events , the communication is indirect , i.e. a protocol module that triggers an event is not aware of the module ( s ) that handle the event .\nEvents enable one-to-many communication within a protocol stack .\nTriggering an event can be done either synchronously or asynchronously .\nIn the former case , the thread that triggers an event e is blocked until all protocol modules that handle e have terminated handling of event e .\nIn the latter case , the thread that triggers the event is not blocked .\nProtocol Modules .\nIn event-based protocol frameworks , a protocol module consists of a set of handlers .\nEach handler is dedicated to handling of a specific event .\nHandlers of the same protocol module may share data .\nHandlers can be dynamically bound to events .\nHandlers can also be unbound dynamically .\nUpon triggering some event e , all handlers bound to e are executed .\nIf no handler is bound , the behavior is usually unspecified .\nFigure 2 : Example of an event-based protocol stack\nIn Figure 2 , we show an example of an event-based stack .\nEvents are represented by small letters , e.g. e , f , ... The fact that a protocol module can trigger an event is represented by an arrow starting from the module .\nA white trapezoid inside a module box represents a handler defined by the protocol module .\nTo mark that some handler is bound to event e , we use an arrow pointing to the handler ( the label on the arrow represents the event e ) .\nFor example , the protocol module P1 triggers event e and handles event f ( see Fig. 
2 ) .\nNote that the network is represented as a special protocol module that handles the send event ( to send a message to another machine ) and triggers the deliver event ( upon receipt of a message from another machine ) .\nSpecific Features .\nSome protocol frameworks have unique features .\nBelow , we present the features that influence composition and implementation of protocol modules .\nIn Cactus [ 5 , 2 ] , the programmer can give a priority number to a handler upon binding it to an event .\nWhen an event is triggered , all handlers are executed following the order of priority .\nA handler h is also able to cancel the execution of an event trigger : all handlers that should be executed after h according to the priority are not executed .\nAppia [ 1 , 16 ] and Eva [ 3 ] introduce the notion of channels .\nChannels allow to build routes of events in protocol stacks .\nEach protocol module has to subscribe to one or many channels .\nAll events are triggered by specifying a channel they belong to .\nWhen a protocol module triggers an event e specifying channel c , all handlers bound to e that are part of a protocol that subscribes to c are executed ( in the order prescribed by the definition of channel c ) .\n4 .\nSERVICE-BASED PROTOCOL FRAMEWORK\nIn this section , we describe our new approach for implementing and composing protocols that is based on services .\nWe show in Section 5 the advantages of service-based protocol frameworks over event-based protocol frameworks .\nService Interface .\nIn our service-based framework , protocol modules in the same stack communicate through objects called service interfaces .\nRequests , replies and notifications are all issued to service interfaces .\nProtocol Modules .\nA protocol module is a set of executers , listeners and interceptors .\nExecuters handle requests .\nAn executer can be dynamically bound to a service interface .\nIt can be later unbound .\nA request issued to a service interface si leads to the execution of the executer bound to si .\nIf no executer is bound to si , the request is delayed until some executer is bound to si .\nContrary to events , at most one executer at any time can be bound to a service interface on every machine .\nListeners handle replies and notifications .\nA listener can be dynamically bound and unbound to/from a service interface si .\nA notification issued to a service interface si is handled by all listeners bound to si in the local stack .\nA reply issued to a service interface is handled by one single listener .\nTo ensure that one single listener handles a reply , a module Pi has to identify , each time it issues a request , the listener to handle the possible reply .\nIf the request and the reply occur respectively , in stack i and in stack j , the service interface si on i communicates to the service interface si ' on j the listener that must handle the reply .\nIf the listener that must handle the reply does not exist , the reply is delayed until the listener is created .\nNetwork\nFigure 3 : Example of a service-based protocol stack\nIn Figure 3 , we show an example of a service-based stack .\nWe denote a service interface by a small letter ( e.g. 
t , u and nt ) in a hexagonal box .\nThe fact that a module Pi can generate a request to a service interface si is represented by a dashed black arrow going from Pi to si .\nSimilarly , a dashed white arrow going from module Pi to service interface si represents the fact that Pi can generate a reply or a notification to si .\nWe represent executers with white boxes inside protocol modules and listeners with white boxes with a gray border .\nA connecting line between a service interface si and an executer e ( resp .\na listener l ) shows that e ( resp .\nl ) is bound to si .\nIn Figure 3 , module Q1 contains an executer bound to service interface t and a listener bound to service interface u. Module Q1 can generate replies and notifications to service interface t and requests to service interface u. Note that the service interface nt allows to access the network .\nFigure 4 : Execution of protocol interactions with interceptors\nAn interceptor plays a special r\u02c6ole .\nSimilarly to executers , interceptors can be dynamically bound or unbound to a service interface .\nThey are activated each time a request , a reply or a notification is issued to the service interface they are bound to .\nThis is illustrated in Figure 4 .\nIn the right part of the figure , the interceptor of the protocol module T1 is represented by a rounded box .\nThe interceptor is bound to service interface t .\nThe left part of the figure shows that an interceptor can be seen as an executer plus a listener .\nWhen P1 issues a request req to the service interface t , the executer-interceptor of T1 is executed .\nThen , module T1 may forward a request req ' to the service interface t , where we can have req = 6 req ' 1 .\nWhen module Q1 issues a reply or a notification , a similar mechanism is used , except that this time the listener-interceptor of T1 is executed .\nNote that a protocol module Ti , that has an interceptor bound to a service interface , is able to modify requests , replies and notifications .\nUpon requests , if several interceptors are bound to the same service interface , they are executed in the order of binding .\nUpon replies and notifications , the order is reversed .\n5 .\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN\nWe show in this section the advantages of service-based protocol frameworks over event-based protocol frameworks .\nWe structure our discussion in three parts .\nFirstly , we present how protocol interactions are modeled in each of the protocol frameworks .\nThen , we discuss the composition of protocol modules in each of these frameworks .\nFinally , we present the problem of dynamic protocol replacement and the advantages of service interfaces in order to implement it .\nThe discussion is summarized in Table 1 .\n5.1 Protocol Module Interactions\nA natural model of protocol interactions ( as presented in Section 2 ) facilitates the implementation of protocol modules .\nFor each protocol interaction , we show how it is modeled in both frameworks .\nWe also explain that an inadequate model may lead to problems .\nRequests .\nIn service-based frameworks , a request is generated to a service interface .\nEach request is handled by at most one executer , since we allow only one executer to be bound to a service interface at any time .\nOn the other hand , in event-based frameworks , a protocol module emulates a request by triggering an event .\nThere is no guarantee\nthat this event is bound to only one handler , which may lead to programming errors .\nReplies .\nWhen a 
protocol module generates a reply in a service-based framework , only the correct listener ( identified at the time the corresponding request was issued ) is executed .\nThis ensures that a request issued by some protocol module Qi , leads to replies handled by protocol modules Qj ( i.e. protocol modules of the same protocol ) .\nThis is not the case in event-based frameworks , as we now show .\nConsider protocol module Q , in Figure 2 that triggers event g to emulate a request .\nModule S , handles the request .\nWhen modules Si triggers event h to emulate a reply ( remember that a reply can occur in many stacks ) , both modules Qi and Ri will handle the reply ( they both contain a handler bound to h ) .\nThis behavior is not correct : only protocol modules Qi should handle the reply .\nMoreover , as modules Ri are not necessarily implemented to interact with modules Qi , this behavior may lead to errors .\nSolutions to solve this problem exist .\nHowever , they introduce an unnecessary burden on the protocol programmers and the stack composer .\nFor instance , channels allow to route events to ensure that modules handle only events concerning them .\nHowever , the protocol programmer must take channels into account when implementing protocols .\nMoreover , the composition of complex stacks becomes more difficult due to the fact that the composer has to create many channels to ensure that modules handle events correctly .\nAn addition of special protocol modules ( named connectors ) for routing events is also not satisfactory , since it requires additional work from the composer and introduces overhead .\nNotifications .\nContrary to requests and replies , notifications are well modeled in event-based frameworks .\nThe reason is that notifications correspond to the one-to-many communication scheme provided by events .\nIn service-based frameworks , notifications are also well modeled .\nWhen a module generates a notification to a service interface si , all listeners bound to s are executed .\nNote that in this case , service interfaces provide the same pattern of communication as events .\n5.2 Protocol Module Composition\nReplies ( and sometimes notifications ) are the results of a request .\nThus , there is a semantic link between them .\nThe composer of protocol modules must preserve this link in order to compose correct stacks .\nWe explain now that service based frameworks provide a mechanism to preserve this link , while in event-based frameworks , the lack of such mechanism leads to error-prone composition .\nIn service-based frameworks , requests , replies and notifications are issued to a service interface .\nThus , a service interface introduces a link between these interactions .\nTo compose a correct stack , the composer has to bound a listener to service interface si for each module that issues a request to si .\nThe same must be done for one executer that is part of a module that issues replies or notifications .\nApplying this simple methodology ensures that every request issued to a service interface si eventually results in several replies or notifications issued to the same service interface si .\nIn event-based frameworks , all protocol interactions are issued through different events : there is no explicit link between an event triggered upon requests and an event triggered upon the corresponding replies .\nThus , the composer of a protocol stack must know the meaning of each event in order to preserve the semantic link between replies ( and notifications ) and requests 
.\nMoreover , nothing prevents from binding a handler that should handle a request to an event used to issue a reply .\nNote that these problems can be partially solved by typing events and handlers .\nHowever , it does not prevent from errors if there are several instances of the same event type .\nNote that protocol composition is clearer in the protocol frameworks that are based on services , rather than on events .\nThe reason is that several events that are used to model different protocol interactions can be modeled by a single service interface .\n5.3 Dynamic Replacement of Protocols\nDynamic replacement of protocols consists in switching on-the-fly between protocols that solve the same problem .\nReplacement of a protocol P by a new protocol newP means that a protocol module Pi is replaced by newPi in every stack i .\nThis replacement is problematic since the local replacements ( within stacks ) must be synchronized in order to guarantee protocol correctness [ 21 , 18 ] .\nFigure 5 : Dynamic replacement of protocol P\nFor the synchronization algorithms to work , module interactions are intercepted in order to detect a time when Pi should be replaced by newPi .\n( Other solutions , e.g. in [ 11 ] , are more complex . )\nIn Fig. 5 , we show how this interception can be implemented in protocol frameworks that are based on services ( in the left part of the figure ) and events ( in the right part of the figure ) .\nThe two-sided arrows point to the protocol modules P , and newP , that are switched .\nIt can be seen that the approach that uses the Service Interface mechanism has advantages .\nThe intercepting module Repl-P , has an interceptor bound to service interface t that intercepts every request handled by modules P , and all replies and notifications issued by P , .\nThe code of the module P , can therefore remain unchanged .\nIn event-based frameworks , the solution is to add an intermediate module Repl-P , that intercepts the requests issued to P , and also the replies and notifications issued by P , .\nAlthough this ad-hoc solution may seem similar to the servicebased approach , there is an important difference .\nThe eventbased solution requires to slightly modify the module P , since instead of handling event g and triggering event h , P , must now handle different events g ' and h ' ( see Fig. 5 ) .\n6 .\nIMPLEMENTATION\nWe have implemented an experimental service-based protocol framework ( called SAMOA ) [ 7 ] .\nOur implementation is light-weight : it consists of approximately 1200 lines of code in Java 1.5 ( with generics ) .\nIn this section , we describe the main two classes of our implementation : Service ( encoding the Service Interface ) and\nTable 1 : Service-based vs. 
event-based\nProtocol ( encoding protocol modules ) .\nFinally , we present an example protocol stack that we have implemented to validate the service-based approach .\nThe Service Class .\nA Service object is characterized by the arguments of requests and the arguments of responses .\nA response is either a reply or a notification .\nA special argument , called message , determines the kind of interactions modeled by the response .\nA message represents a piece of information sent over the network .\nWhen a protocol module issues a request , it can give a message as an argument .\nThe message can specify the listener that must handle the reply .\nWhen a protocol module issues a response to a service interface , a reply is issued if one of the arguments of the response is a message specifying a listener .\nOtherwise , a notification is issued .\nExecuters , listeners and interceptors are encoded as innerclasses of the Service class .\nThis allows to provide type-safe protocol interactions .\nFor instance , executers can only be bound to the Service object , they belong to .\nThus , the parameters passed to requests ( that are verified statically ) always correspond to the parameters accepted by the corresponding executers .\nThe type of a Service object is determined by the type of the arguments of requests and responses .\nA Service object t is compatible with another Service object s if the type of the arguments of requests ( and responses ) of t is a subtype of the arguments of requests ( and responses ) of s .\nIn practice , if a protocol module Pi can issue a request to a protocol UDP , then it may also issue a request to TCP ( compatible with UDP ) due to the subtyping relation on parameters of communicating modules .\nThe Protocol Class .\nA Protocol object consists of three sets of components , one set for each component type ( a listener , an executer , and an interceptor ) .\nProtocol objects are characterized by names to retrieve them easily .\nMoreover , we have added some features to bind and unbind all executers or interceptors to/from the corresponding Service objects .\nProtocol objects can be loaded to a stack dynamically .\nAll these features made it easy to implement dynamic replacement of network protocols .\nProtocol Stack Implementation .\nTo validate our ideas , we have developed an Adaptive Group Communication ( AGC ) middleware , adopting both the service - and the event-based approaches .\nFig. 6 shows the corresponding stacks of the AGC middleware .\nBoth stacks allow the Consensus and Atomic Broadcast protocols to be dynamically updated .\nThe architecture of our middleware , shown in Fig. 6 , builds on the group communication stack described in [ 15 ] .\nThe UDP and RP2P modules provide respectively , unreliable and reliable point-to-point transport .\nThe FD module implements a failure detector ; we assume that it ensures the\nFigure 6 : Adaptive Group Communication Middleware : service-based ( left ) vs. 
event-based ( right )\nproperties of the \u2738 S failure detector [ 9 ] .\nThe CT module provides a distributed consensus service using the ChandraToueg algorithm [ 10 ] .\nThe ABc .\nmodule implements atomic broadcast -- a group communication primitive that delivers messages to all processes in the same order .\nThe GM module provides a group membership service that maintains consistent membership data among group members ( see [ 19 ] for details ) .\nThe Repl ABc .\nand the Repl CT modules implement the replacement algorithms [ 18 ] for , respectively , the ABc .\nand the CT protocol modules .\nNote that each arrow in the event-based architecture represents an event .\nWe do not name events in the figure for readability .\nThe left stack in Figure 6 shows the implementation of AGC with our service-based framework .\nThe right stack shows the same implementation with an event-based framework .\nPerformance Evaluation .\nTo evaluate the overhead of service interfaces , we compared performance of the serviceand event-based implementations of the AGC middleware .\nThe latter implementation of AGC uses the Cactus protocol framework [ 5 , 2 ] .\nIn our experiment , we compared the average latency of Atomic Broadcast ( ABcast ) , which is defined as follows .\nConsider a message m sent using ABcast .\nWe denote by ti ( m ) the time between the moment of sending m and the moment of delivering m on a machine ( stack ) i .\nWe define the average latency of m as the average of ti ( m ) for all machines ( stacks ) i within a group of stacks .\nPerformance tests have been made using a cluster of PCs running Red Hat Linux 7.2 , where each PC has a Pentium III 766 MHz processor and 128MB of RAM .\nAll PCs are interconnected by a 100 Base-TX duplex Ethernet hub .\nOur experiment has involved 7 machines ( stacks ) that ABcast messages of 4Mb under a constant load , where a load is a number of messages per second .\nIn Figure 7 , we show the results of our experiment for different loads .\nLatencies are shown on the vertical axis , while message loads are shown on the horizontal axis .\nThe solid line shows the results obtained with our service-based framework .\nThe dashed line shows the results obtained with the Cactus framework .\nThe Network Network\nFigure 7 : Comparison between our service-based framework and Cactus\noverhead of the service-based framework is approximately 10 % .\nThis can be explained as follows .\nFirstly , the servicebased framework provides a higher level abstraction , which has a small cost .\nSecondly , the AGC middleware was initially implemented and optimized for the event-based Cactus framework .\nHowever , it is possible to optimize the AGC middleware for the service-based framework .\n7 .\nCONCLUSION\nIn the paper , we proposed a new approach to the protocol composition that is based on the notion of Service Interface , instead of events .\nWe believe that the service-based framework has several advantages over event-based frameworks .\nIt allows us to : ( 1 ) model accurately protocol interactions , ( 2 ) reduce the risk of errors during the composition phase , and ( 3 ) simply implement dynamic protocol updates .\nA prototype implementation allowed us to validate our ideas ."} {"id": "J-15", "title": "", "abstract": "", "keyphrases": ["auction", "multiattribut auction", "prefer handl", "measur valu function theori", "iter auction mechan", "mvf", "gau", "gai base auction"], "prmu": [], "lvl-1": "Generalized Value Decomposition and Structured Multiattribute Auctions 
Yagil Engel and Michael P. Wellman University of Michigan, Computer Science & Engineering 2260 Hayward St, Ann Arbor, MI 48109-2121, USA {yagil,wellman}@umich.\nedu ABSTRACT Multiattribute auction mechanisms generally either remain agnostic about traders'' preferences, or presume highly restrictive forms, such as full additivity.\nReal preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction.\nWe develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences.\nA set of local conditional independence relations over such differences supports a generalized additive preference representation, which decomposes utility across overlapping clusters of related attributes.\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations.\nWhen traders'' preferences are consistent with the auction``s generalized additive structure, the mechanism produces approximately optimal allocations, at approximate VCG prices.\nCategories and Subject Descriptors: J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms: Algorithms, Economics 1.\nINTRODUCTION Multiattribute trading mechanisms extend traditional, price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal.\nRather than negotiating over a fully defined good or service, a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified.\nFor example, a procurement department of a company may use a multiattribute auction to select a supplier of hard drives.\nSupplier offers may be evaluated not only over the price they offer, but also over various qualitative attributes such as volume, RPM, access time, latency, transfer rate, and so on.\nIn addition, suppliers may offer different contract conditions such as warranty, delivery time, and service.\nIn order to account for traders'' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations.\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes, therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form.\nBy far the most popular multiattribute form to adopt is the simplest: an additive representation where overall value is a linear combination of values associated with each attribute.\nFor example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations.\nSuch additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes.\nIn practice, however, interdependencies among natural attributes are quite common.\nFor example, the buyer may exhibit complementary preferences for size and access time (since the performance effect is more salient if much data is involved), or may view a strong warranty as a good substitute for high reliability ratings.\nSimilarly, the seller``s production 
characteristics (such as increasing access time is harder for larger hard drives) can easily violate additivity.\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences.\nOn the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure.\nOur goal, therefore, is to identify the subtler yet more widely applicable structured representations, and exploit these properties of preferences in trading mechanisms.\nWe propose an iterative auction mechanism based on just such a flexible preference structure.\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19].\nPK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders'' preferences, and lets sellers bid on the full multidimensional attribute space.\nBecause NLD maintains an exponential price structure, it is suitable only for small domains.\nThe other auction (AD) assumes additive buyer valuation and seller cost functions.\nIt collects sell bids per attribute level and for a single discount term.\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount.\nThe auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons.\nWe employ a preference decomposition based on generalized additive independence (GAI), a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired, yet providing a compact functional form to the extent that interdependence can be limited.\nGiven its roots in multiattribute utility theory [13], 227 the GAI condition is defined with respect to the expected utility function.\nTo apply it for modeling values for certain outcomes, therefore, requires a reinterpretation for preference under certainty.\nTo this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference.\nWe first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in the willingness to pay (wtp) for each.\nThat is, we should be able not only to compare outcomes but also decide whether the difference in quality is worth a given difference in price.\nNext, we build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function.\nAfter laying out this infrastructure, we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format.\nWe then study the auction``s allocational, computational, and practical properties.\nIn Section 2 we present essential background on our representation framework, the measurable value function (MVF).\nSection 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions.\nNext, we show the applicability of the theoretical framework to preferences in trading.\nThe rest of the paper is devoted to the proposed auction mechanism.\n2.\nMULTIATTRIBUTE PREFERENCES As mentioned, most tools facilitating expression of multiattribute value for trading applications assume that agents'' preferences can be represented in an 
additive form.\nBy way of background, we start by introducing the formal prerequisites justifying the additive representation, as provided by multiattribute utility theory.\nWe then present the generalized additive form, and develop the formal underpinnings for measurable value needed to extend this model to the case of choice under certainty.\n2.1 Preferential Independence Let \u0398 denote the space of possible outcomes, with a preference relation (weak total order) over \u0398.\nLet A = {a0, ... , am} be a set of attributes describing \u0398.\nCapital letters denote subsets of variables, small letters (with or without numeric subscripts) denote specific variables, and \u00afX denotes the complement of X with respect to A.\nWe indicate specific variable assignments with prime signs or superscripts.\nTo represent an instantiation of subsets X, Y at the same time we use a sequence of instantiation symbols, as in X Y .\nDEFINITION 1.\nA set of attributes Y \u2282 A is preferentially independent (PI) of its complement Z = A \\ Y if the conditional preference order over Y given a fixed level Z0 of Z is the same regardless of the choice of Z0 .\nIn other words, the preference order over the projection of A on the attributes in Y is the same for any instantiation of the attributes in Z. DEFINITION 2.\nA = {a1, ... , am} is mutually preferentially independent (MPI) if any subset of A is preferentially independent of its complement.\nThe preference relation when no uncertainty is modeled is usually represented by a value function v [17].\nThe following fundamental result greatly simplifies the value function representation.\nTHEOREM 1 ([9]).\nA preference order over set of attributes A has an additive value function representation v(a1, ... , am) = mX i=1 vi(ai) iff A is mutually preferential independent.\nEssentially, the additive forms used in trading mechanisms assume mutual preferential independence over the full set of attributes, including the money attribute.\nIntuitively that means that willingness to pay for value of an attribute or attributes cannot be affected by the value of other attributes.\nA cardinal value function representing an ordering over certain outcomes need not in general coincide with the cardinal utility function that represents preference over lotteries or expected utility (EU).\nNevertheless, EU functions may possess structural properties analogous to that for value functions, such as additive decomposition.\nSince the present work does not involve decisions under uncertainty, we do not provide a full exposition of the EU concept.\nHowever we do make frequent reference to the following additive independence relations.\nDEFINITION 3.\nLet X, Y, Z be a partition of the set of attributes A. X and Y are conditionally additive independent given Z, denoted as CAI(X, Y | Z), if preferences over lotteries on A depend only on their marginal conditional probability distributions over X and Y .\nDEFINITION 4.\nLet I1, ... , Ig \u2286 A such that Sg i=1 Ii = A. I1, ... , Ig are called generalized additive independent (GAI) if preferences over lotteries on A depend only on their marginal distributions over I1, ... , Ig.\nAn (expected) utility function u(\u00b7) can be decomposed additively according to its (possibly overlapping) GAI sub-configurations.\nTHEOREM 2 ([13]).\nLet I1, ... , Ig be GAI.\nThen there exist functions f1, ... , fg such that u(a1, ... 
, am) = g X r=1 fr(Ir).\n(1) What is now known as the GAI condition was originally introduced by Fishburn [13] for EU, and was named GAI and brought to the attention of AI researchers by Bacchus and Grove [1].\nGraphical models and elicitation procedures for GAI decomposable utility were developed for EU [4, 14, 6], for a cardinal representation of the ordinal value function [15], and for an ordinal preference relations corresponding to a TCP-net structure by Brafman et al. [5].\nApart from the work on GAI in the context of preference handling that were discussed above, GAI have been recently used in the context of mechanism design by Hyafil and Boutilier [16], as an aid in direct revelation mechanisms.\nAs shown by Bacchus and Grove [1], GAI structure can be identified based on a set of CAI conditions, which are much easier to detect and verify.\nIn general, utility functions may exhibit GAI structure not based on CAI.\nHowever, to date all proposals for reasoning and eliciting utility in GAI form take advantage of the GAI structure primarily to the extent that it represents a collection of CAI conditions.\nFor example, GAI trees [14] employ triangulation of the CAI map, and Braziunas and Boutilier``s [6] conditional set Cj of a set Ij corresponds to the CAI separating set of Ij.\nSince the CAI condition is also defined based on preferences over lotteries, we cannot apply Bacchus and Grove``s result without first establishing an alternative framework based on priced outcomes.\nWe develop such a framework using the theory of measurable value functions, ultimately producing a GAI decomposition 228 (Eq.\n1) of the wtp function.\nReaders interested primarily in the multiattribute auction and willing to grant the well-foundedness of the preference structure may skip down to Section 5.\n2.2 Measurable Value Functions Trading decisions represent a special case of decisions under certainty, where choices involve multiattribute outcomes and corresponding monetary payments.\nIn such problems, the key decision often hinges on relative valuations of price differences compared to differences in alternative configurations of goods and services.\nTheoretically, price can be treated as just another attribute, however, such an approach fails to exploit the special character of the money dimension, and can significantly add to complexity due to the inherent continuity and typical wide range of possible monetary outcome values.\nWe build on the fundamental work of Dyer and Sarin [10, 11] on measurable value functions (MVFs).\nAs we show below, wtp functions in a quasi-linear setting can be interpreted as MVFs.\nHowever we first present the MVF framework in a more generic way, where the measurement is not necessarily monetary.\nWe present the essential definitions and refer to Dyer and Sarin for more detailed background and axiomatic treatment.\nThe key concept is that of preference difference.\nLet \u03b81 , \u03b82 , \u03d11 , \u03d12 \u2208 \u0398 such that \u03b81 \u03b82 and \u03d11 \u03d12 .\n[\u03b82 , \u03b81 ] denotes the preference difference between \u03b82 and \u03b81 , interpreted as the strength, or degree, to which \u03b82 is preferred over \u03b81 .\nLet \u2217 denote a preference order over \u0398 \u00d7 \u0398.\nWe interpret the statement [\u03b82 , \u03b81 ] \u2217 [\u03d12 , \u03d11 ] as the preference of \u03d12 over \u03d11 is at least as strong as the preference of \u03b82 over \u03b81 .\nWe use the symbol \u223c\u2217 to represent equality of preference differences.\nDEFINITION 5.\nu 
: D \u2192 is a measurable value function (MVF) wrt \u2217 if for any \u03b81 , \u03b82 , \u03d11 , \u03d12 \u2208 D, [\u03b82 , \u03b81 ] \u2217 [\u03d12 , \u03d11 ] \u21d4 u(\u03b82 ) \u2212 u(\u03b81 ) \u2264 u(\u03d12 ) \u2212 u(\u03d11 ).\nNote that an MVF can also be used as a value function representing , since [\u03b8 , \u03b8] \u2217 [\u03b8 , \u03b8] iff \u03b8 \u03b8 .\nDEFINITION 6 ([11]).\nAttribute set X \u2282 A is called difference independent of \u00afX if for any two assignments X1 \u00afX X2 \u00afX , [X1 \u00afX , X2 \u00afX ] \u223c\u2217 [X1 \u00afX , X2 \u00afX ] for any assignment \u00afX .\nOr, in words, the preference differences on assignments to X given a fixed level of \u00afX do not depend on the particular level chosen for \u00afX.\nAs with additive independence for EU, this condition is stronger than preferential independence of X. Also analogously to EU, mutual preferential independence combined with other conditions leads to additive decomposition of the MVF.\nMoreover, Dyer and Sarin [11] have defined analogs of utility independence [17] for MVF, and worked out a parallel set of decomposition results.\n3.\nADVANCED MVF STRUCTURES 3.1 Conditional Difference Independence Our first step is to generalize Definition 6 to a conditional version.\nDEFINITION 7.\nLet X, Y, Z be a partition of the set of attributes A. X is conditionally difference independent of Y given Z, denoted as CDI(X, Y | Z), if \u2200 instantiations \u02c6Z, X1 , X2 , Y 1 , Y 2 [X1 Y 1 \u02c6Z, X2 Y 1 \u02c6Z] \u223c [X1 Y 2 \u02c6Z, X2 Y 2 \u02c6Z].\nSince the conditional set is always the complement, we sometimes leave it implicit, using the abbreviated notation CDI(X, Y ).\nCDI leads to a decomposition similar to that obtained from CAI [17].\nLEMMA 3.\nLet u(A) be an MVF representing preference differences.\nThen CDI(X, Y | Z) iff u(A) = u(X0 , Y, Z) + u(X, Y 0 , Z) \u2212 u(X0 , Y 0 , Z).\nTo complete the analogy with CAI, we generalize Lemma 3 as follows.\nPROPOSITION 4.\nCDI(X, Y | Z) iff there exist functions \u03c81(X, Z) and \u03c82(Y, Z), such that u(X, Y, Z) = \u03c81(X, Z) + \u03c82(Y, Z).\n(2) An immediate result of Proposition 4 is that CDI is a symmetric relation.\nThe conditional independence condition is much more applicable than the unconditional one.\nFor example, if attributes a \u2208 X and b /\u2208 X are complements or substitutes, X cannot be difference independent of \u00afX. However, X \\ {a} may still be CDI of \u00afX given a. 3.2 GAI Structure for MVF A single CDI condition decomposes the value function into two parts.\nWe seek a finer-grain global decomposition of the utility function, similar to that obtained from mutual preferential independence.\nFor this purpose we are now ready to employ the results of Bacchus and Grove [1], who establish that the CAI condition has a perfect map [20]; that is, there exists a graph whose nodes correspond to the set A, and its node separation reflects exactly the complete set of CAI conditions on A. Moreover, they show that the utility function decomposes over the set of maximal cliques of the perfect map.\nTheir proofs can be easily adapted to CDI, since they only rely on the decomposition property of CAI that is also implied by CDI according to Proposition 4.\nTHEOREM 5.\nLet G = (A, E) be a perfect map for the CDI conditions on A.\nThen u(A) = g X r=1 fr(Ir), (3) where I1, ... , Ig are (overlapping) subsets of A, each corresponding to a maximal clique of G. 
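To make the decomposition of Theorem 5 concrete, the following minimal Java sketch evaluates a GAI-decomposed value function over the maximal cliques of a CDI map: u(θ) is the sum of local functions f_r applied to the projections of θ on each clique I_r. The attributes a, b, c, the two overlapping cliques {a,b} and {b,c}, and the local value tables are hypothetical illustrations introduced here for exposition; they are not data or code from the paper.

```java
import java.util.*;

/** Minimal sketch of a GAI-decomposed value function (cf. Theorem 5):
 *  u(A) is a sum of local functions, one per maximal clique of the CDI map.
 *  The attribute names and the cliques {a,b}, {b,c} are hypothetical. */
public class GaiValueFunction {
    private final List<List<String>> cliques;             // e.g. [[a,b],[b,c]]
    private final List<Map<List<String>, Double>> tables; // local table f_r per clique

    public GaiValueFunction(List<List<String>> cliques,
                            List<Map<List<String>, Double>> tables) {
        this.cliques = cliques;
        this.tables = tables;
    }

    /** u(theta) = sum over r of f_r(theta restricted to clique I_r). */
    public double value(Map<String, String> theta) {
        double u = 0.0;
        for (int r = 0; r < cliques.size(); r++) {
            List<String> proj = new ArrayList<>();
            for (String attr : cliques.get(r)) proj.add(theta.get(attr));
            u += tables.get(r).getOrDefault(proj, 0.0);
        }
        return u;
    }

    public static void main(String[] args) {
        // Cliques I1 = {a,b} and I2 = {b,c}, overlapping on attribute b.
        List<List<String>> cliques = List.of(List.of("a", "b"), List.of("b", "c"));
        Map<List<String>, Double> f1 = Map.of(List.of("a1", "b1"), 5.0,
                                              List.of("a2", "b1"), 3.0);
        Map<List<String>, Double> f2 = Map.of(List.of("b1", "c1"), 2.0,
                                              List.of("b1", "c2"), 4.0);
        GaiValueFunction u = new GaiValueFunction(cliques, List.of(f1, f2));
        // u(a1,b1,c2) = f1(a1,b1) + f2(b1,c2) = 5 + 4 = 9
        System.out.println(u.value(Map.of("a", "a1", "b", "b1", "c", "c2")));
    }
}
```

The same additive structure is what lets the auction below price overlapping sub-configurations rather than full joint configurations.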
Given Theorem 5, we can now identify an MVF GAI structure from a collection of CDI conditions.\nThe CDI conditions, in turn, are particularly intuitive to detect when the preference differences carry a direct interpretation, as in the case with monetary differences discussed below.\nMoreover, the assumption or detection of CDI conditions can be performed incrementally, until the MVF is decomposed to a reasonable dimension.\nThis is in contrast with the fully additive decomposition of MVF that requires mutual preferential independence [11].\nTheorem 5 defines a decomposition structure, but to represent the actual MVF we need to specify the functions over the cliques.\n229 The next theorem establishes that the functional constituents of MVF are the same as those for GAI decompositions as defined by Fishburn [13] for EU.\nWe adopt the following conventional notation.\nLet (a0 1, ... , a0 m) be a predefined vector called the reference outcome.\nFor any I \u2286 A, the function u([I]) stands for the projection of u(A) to I where the rest of the attributes are fixed at their reference levels.\nTHEOREM 6.\nLet G = (A, E) be a perfect map for the CDI condition on A, and {I1, ... , Ig} a set of maximal cliques as defined in Theorem 5.\nThen the functional decomposition from that theorem can be defined as f1 = u([I1]), and for r = 2, ... , g (4) fr = u([Ir]) + r\u22121X k=1 (\u22121)k X 1\u2264i1<\u00b7\u00b7\u00b7 fb,r(\u03b8r) for all \u03b8r.\nThe discount \u0394 is initialized to zero.\nThe auction has the dynamics of a descending clock auction: at each round t, bids are collected for current prices and then prices are reduced according to price rules.\nA seller is considered active in a round if she submits at least one full bid.\nIn round t > 1, only sellers who where active in round t \u2212 1 are allowed to participate, and the auction terminates when no more than a single seller is active.\nWe denote the set of sub-bids submitted by si by Bt i , and the corresponding set of full bids is Bt i = {\u03b8 = (\u03b81, ... , \u03b8g) \u2208 \u0398 | \u2200r.\u03b8r \u2208 Bt i }.\nIn our example, a seller could submit sub-bids on a set of subconfigurations such as a1 b1 and b1 c1 , and that combines to a full bid on a1 b1 c1 .\nThe auction proceeds in two phases.\nIn the first phase (A), at each round t the auction computes a set of preferred sub-configurations Mt .\nSection 5.4 shows how to define Mt to ensure convergence, and Section 5.5 shows how to efficiently compute it.\nIn phase A, the auction adjusts prices after each round, reducing the price of every sub-configuration that has received a bid but is not in the preferred set.\nLet be the prespecified price increment parameter.\nSpecifically, the phase A price change rule is applied to all \u03b8r \u2208 Sn i=1 Bt i \\ Mt : pt+1 (\u03b8r) \u2190 max(pt (\u03b8r) \u2212 g , fb,r(\u03b8r)).\n[A] The RHS maximum ensures that prices do not get reduced below the buyer``s valuation in phase A. Let Mt denote the set of configurations that are consistent covers in Mt : Mt = {\u03b8 = (\u03b81, ... , \u03b8g) \u2208 \u0398 | \u2200r.\u03b8r \u2208 Mt } The auction switches to phase B when all active sellers have at least one full bid in the buyer``s preferred set: \u2200i. 
Bt i = \u2205 \u2228 Bt i \u2229 Mt = \u2205.\n[SWITCH] Let T be the round at which [SWITCH] becomes true.\nAt this point, the auction selects the buyer-optimal full bid \u03b7i for each seller si.\n\u03b7i = arg max \u03b8\u2208BT i (ub(\u03b8) \u2212 pT (\u03b8)).\n(6) In phase B, si may bid only on \u03b7i.\nThe prices of sub-configurations are fixed at pT (\u00b7) during this phase.\nThe only adjustment in phase B is to \u0394, which is increased in every round by .\nThe auction terminates when at most one seller (if exactly one, designate it s\u02c6i) is active.\nThere are four distinct cases: 1.\nAll sellers drop out in phase A (i.e., before rule [SWITCH] holds).\nThe auction returns with no allocation.\n6 The discount term could be replaced with a uniform price reduction across all sub-configurations.\n2.\nAll active sellers drop out in the same round in phase B.\nThe auction selects the best seller (s\u02c6i) from the preceding round, and applies the applicable case below.\n3.\nThe auction terminates in phase B with a final price above buyer``s valuation, pT (\u03b7\u02c6i) \u2212 \u0394 > ub(\u03b7\u02c6i).\nThe auction offers the winner s\u02c6i an opportunity to supply \u03b7\u02c6i at price ub(\u03b7\u02c6i).\n4.\nThe auction terminates in phase B with a final price pT (\u03b7\u02c6i)\u2212 \u0394 \u2264 ub(\u03b7\u02c6i).\nThis is the ideal situation, where the auction allocates the chosen configuration and seller at this resulting price.\nThe overall auction is described by high-level pseudocode in Algorithm 1.\nAs explained in Section 5.4, the role of phase A is to guide the traders to their efficient configurations.\nPhase B is a one-dimensional competition over the surplus that remaining seller candidates can provide to the buyer.\nIn Section 5.5 we discuss the computational tasks associated with the auction, and Section 5.6 provides a detailed example.\nAlgorithm 1 GAI-based multiattribute auction collect a reported valuation, \u02c6v from the buyer set high initial prices, p1 (\u03b8r) on each level \u03b8r, and set \u0394 = 0 while not [SWITCH] do collect sub-bids from sellers compute Mt apply price change by [A] end while compute \u03b7i while more than one active seller do increase \u0394 by collect bids on (\u03b7i, \u0394) from sellers end while implement allocation and payment to winning seller 5.4 Economic Analysis When the optimal solution to MAP (5) provides negative welfare and sellers do not bid below their cost, the auction terminates in phase A, no trade occurs and the auction is trivially efficient.\nWe therefore assume throughout the analysis that the optimal (seller,configuration) pair provides non-negative welfare.\nThe buyer profit from a configuration \u03b8 is defined as7 \u03c0b(\u03b8) = ub(\u03b8) \u2212 p(\u03b8) and similarly \u03c0i(\u03b8) = p(\u03b8) \u2212 ci(\u03b8) is the profit of si.\nIn addition, for \u03bc \u2286 {1, ... 
, g} we denote the corresponding set of subconfigurations by \u03b8\u03bc, and define the profit from a configuration \u03b8 over the subset \u03bc as \u03c0b(\u03b8\u03bc) = X r\u2208\u03bc (fb,r(\u03b8r) \u2212 p(\u03b8r)).\n\u03c0i(\u03b8\u03bc) is defined similarly for si.\nCrucially, for any \u03bc and its complement \u00af\u03bc and for any trader \u03c4, \u03c0\u03c4 (\u03b8) = \u03c0\u03c4 (\u03b8\u03bc) + \u03c0\u03c4 (\u03b8\u00af\u03bc).\nThe function \u03c3i : \u0398 \u2192 R represents the welfare, or surplus function ub(\u00b7) \u2212 ci(\u00b7).\nFor any price system p, \u03c3i(\u03b8) = \u03c0b(\u03b8) + \u03c0i(\u03b8).\n7 We drop the t superscript in generic statements involving price and profit functions, understanding that all usage is with respect to the (currently) applicable prices.\n232 Since we do not assume anything about the buyer``s strategy, the analysis refers to profit and surplus with respect to the face value of the buyer``s report.\nThe functions \u03c0i and \u03c3i refer to the true cost functions of si.\nDEFINITION 10.\nA seller is called Straightforward Bidder (SB) if at each round t she bids on Bt i as follows: if max\u03b8\u2208\u0398 \u03c0t i (\u03b8) < 0, then Bt i = \u2205.\nOtherwise let \u03a9t i \u2286 arg max \u03b8\u2208\u0398 \u03c0t i (\u03b8) Bt i = {\u03b8r | \u03b8 \u2208 \u03a9t i, r \u2208 {1, ... , g}}.\nIntuitively, an SB seller follows a myopic best response strategy (MBR), meaning they bid myopically rather than strategically by optimizing their profit with respect to current prices.\nTo calculate Bt i sellers need to optimize their current profit function, as discussed in Section 4.2.\nThe following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders.\nLEMMA 8.\nLet \u03a8 be a set of configurations, all maximizing profit for a trader \u03c4 (seller or buyer) at the relevant prices.\nLet \u03a6 = {\u03b8r | \u03b8 \u2208 \u03a8, r \u2208 {1, ... , g}.\nThen any consistent cover in \u03a6 is also a profit-maximizing configuration for \u03c4.\nProof sketch (full proof in the online appendix): A source of an element \u03b8r is a configuration \u02dc\u03b8 \u2208 \u03a8 from which it originated (meaning, \u02dc\u03b8r = \u03b8r).\nStarting from the supposedly suboptimal cover \u03b81 , we build a series of covers \u03b81 , ... 
The following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders.
LEMMA 8. Let Ψ be a set of configurations, all maximizing profit for a trader τ (seller or buyer) at the relevant prices, and let Φ = {θ_r | θ ∈ Ψ, r ∈ {1, ..., g}}. Then any consistent cover in Φ is also a profit-maximizing configuration for τ.
Proof sketch (full proof in the online appendix): A source of an element θ_r is a configuration θ̃ ∈ Ψ from which it originated (meaning θ̃_r = θ_r). Starting from the supposedly suboptimal cover θ^1, we build a series of covers θ^1, ..., θ^L. At each θ^j we flip the values of a set of sub-configurations μ_j corresponding to a subtree, replacing them with the sub-configurations of the configuration θ̂^j ∈ Ψ which is the source of the parent γ_j of μ_j. This ensures that all elements in μ_j ∪ {γ_j} have a mutual source θ̂^j. We show that all θ^j are consistent and that they must all be suboptimal as well; since all elements of θ^L have a mutual source, meaning θ^L = θ̂^L ∈ Ψ, this contradicts the optimality of Ψ.
COROLLARY 9. For an SB seller s_i, at any round t, any consistent cover θ′ in B_i^t satisfies π_i^t(θ′) = max_{θ∈Θ} π_i^t(θ).
Next we consider combinations of configurations that are only within some δ of optimality.
LEMMA 10. Let Ψ be a set of configurations, all within δ of maximizing profit for a trader τ at the given prices, and let Φ be defined as in Lemma 8. Then any consistent cover in Φ is within δg of maximizing utility for τ. This bound is tight; that is, for any GAI tree and a non-trivial domain we can construct a set Ψ as above in which there exists a consistent cover whose utility is exactly δg below the maximal.
Next we formally define M^t. For connected GAI trees, M^t is the set of sub-configurations that are part of a configuration within ε of optimal. When the GAI tree is in fact a forest, we apportion the error proportionally across the disconnected trees. Let G be comprised of trees G_1, ..., G_h. We use θ_j to denote the projection of a configuration θ on the tree G_j, and g_j denotes the number of GAI elements in G_j. Define
M_j^t = {θ_r | π_b^t(θ_j) ≥ max_{θ′_j ∈ Θ_j} π_b^t(θ′_j) − (g_j/g)ε, r ∈ G_j},
and then M^t = ∪_{j=1}^{h} M_j^t. Let e_j = g_j − 1 denote the number of edges in G_j. We define the connectivity parameter e = max_{j=1,...,h} e_j. As shown below, this connectivity parameter is an important factor in the performance of the auction.
COROLLARY 11. Any consistent cover θ′ in M^t satisfies π_b^t(θ′) ≥ max_{θ∈Θ} π_b^t(θ) − (e + 1)ε.
In the fully additive case this loss of efficiency reduces to ε. At the other extreme, if the GAI network is connected then e + 1 = g. We also note that without assuming any preference structure, meaning that the CDI map is fully connected, g = 1 and the efficiency loss is again ε.
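The buyer-preferred set M^t can likewise be computed by plain enumeration, one GAI tree at a time, directly following the definition above. As before, the function name and data layout are our own, and project() is the helper from the Algorithm 1 sketch.

def preferred_set_M(Theta, gai_trees, buyer_value, prices, epsilon):
    # Buyer-preferred sub-configurations M^t, one GAI tree G_j at a time:
    # keep the sub-configurations of every projection whose buyer profit is
    # within (g_j / g) * epsilon of the best projection on that tree.
    g = sum(len(tree) for tree in gai_trees)       # total number of GAI elements
    M = set()
    for tree in gai_trees:
        def tree_profit(theta):                    # pi_b^t of the projection on G_j
            return sum(buyer_value[project(theta, c)] - prices[project(theta, c)]
                       for c in tree)
        best = max(tree_profit(t) for t in Theta)
        slack = len(tree) * epsilon / g            # g_j * epsilon / g
        for t in Theta:
            if tree_profit(t) >= best - slack:
                M.update(project(t, c) for c in tree)
    return M

In the Algorithm 1 sketch this would be supplied as compute_M_fn = lambda prices: preferred_set_M(Theta, gai_trees, buyer_value, prices, epsilon). Proposition 20 in Section 5.5 bounds what a more careful incremental implementation can achieve; enumeration of Θ is used here only for clarity.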
Lemmas 12 through 15 show that, through the price system, the choice of buyer-preferred configurations, and the price change rules, phase A leads the buyer and each of the sellers to their mutually efficient configuration.
LEMMA 12. max_{θ∈Θ} π_b^t(θ) does not change in any round t of phase A.
PROOF. We prove the lemma per tree G_j. The optimal values for disconnected components are independent of each other, hence if the maximal profit for each component does not change, the combined maximal profit does not change either. If the price of a projection θ′_j was reduced during phase A, that is p^{t+1}(θ′_j) = p^t(θ′_j) − δ, it must be the case that some w ≤ g_j sub-configurations of θ′_j are not in M_j^t, and δ = w(ε/g). The definition of M_j^t ensures π_b^t(θ′_j) < max_{θ_j∈Θ_j} π_b^t(θ_j) − (g_j/g)ε. Therefore,
π_b^{t+1}(θ′_j) = π_b^t(θ′_j) + δ = π_b^t(θ′_j) + w(ε/g) ≤ max_{θ_j∈Θ_j} π_b^t(θ_j).
This is true for any configuration whose profit improves, therefore the maximal buyer profit does not change during phase A.
LEMMA 13. The price of at least one sub-configuration must be reduced at every round in phase A.
PROOF. In each round t < T of phase A there exists an active seller i for whom B_i^t ∩ M^t = ∅. However, to be active in round t, B_i^t ≠ ∅. Let θ̂ ∈ B_i^t (a full bid of seller i). If θ̂_r ∈ M^t for all r, then θ̂ ∈ M^t by definition of M^t. Therefore there must be some θ̂_r ∉ M^t. We need to prove that for at least one of these sub-configurations, π_b^t(θ̂_r) < 0, to ensure activation of rule [A]. Assume for contradiction that for every θ̂_r ∉ M^t, π_b^t(θ̂_r) ≥ 0. For simplicity we assume that for any θ_r, π_b^1(θ_r) is some multiple of ε/g (which can easily be arranged); this ensures that π_b^t(θ̂_r) = 0, because once profit hits 0 it cannot increase by rule [A]. If θ̂_r ∉ M^t for all r = 1, ..., g, then π_b^t(θ̂) = 0. This contradicts Lemma 12, since we set high initial prices. Therefore some of the sub-configurations of θ̂ are in M^t; WLOG assume they are θ̂_1, ..., θ̂_k. To be in M^t these k sub-configurations must have been part of some preferred full configuration, meaning there exists θ′ ∈ M^t such that θ′ = (θ̂_1, ..., θ̂_k, θ′_{k+1}, ..., θ′_g). Since θ̂ ∉ M^t, it must be the case that π_b^t(θ̂) < π_b^t(θ′). Therefore π_b^t(θ′_{k+1}, ..., θ′_g) > π_b^t(θ̂_{k+1}, ..., θ̂_g) = 0. Hence for at least one r ∈ {k + 1, ..., g}, π_b^t(θ′_r) > 0, contradicting rule [A].
LEMMA 14. When the solution to MAP provides positive surplus, and at least the best seller is SB, the auction must reach phase B.
PROOF. By Lemma 13 prices must go down in every round of phase A.
Rule [A] sets a lower bound on all prices, therefore the auction either terminates in phase A or must reach condition [SWITCH]. We set the initial prices high enough that max_{θ∈Θ} π_b^1(θ) < 0, and by Lemma 12, max_{θ∈Θ} π_b^t(θ) < 0 throughout phase A. We assume that the efficient allocation (θ*, i*) provides positive welfare, that is, σ_{i*}(θ*) = π_b^t(θ*) + π_{i*}^t(θ*) > 0. Since s_{i*} is SB, she will leave the auction only when π_{i*}^t(θ*) < 0. This can happen only when π_b^t(θ*) > 0, therefore s_{i*} does not drop out in phase A, hence the auction cannot terminate before reaching condition [SWITCH].
LEMMA 15. For an SB seller s_i, η_i is (e + 1)ε-efficient.
PROOF. η_i is chosen to maximize the buyer's surplus out of B_i^T at the end of phase A. Since B_i^T ∩ M^T ≠ ∅, clearly η_i ∈ M^T. From Corollary 11 and Corollary 9, for any θ̃,
π_b^T(η_i) ≥ π_b^T(θ̃) − (e + 1)ε,
π_i^T(η_i) ≥ π_i^T(θ̃)
⇒ σ_i(η_i) ≥ σ_i(θ̃) − (e + 1)ε.
This establishes the approximate bilateral efficiency of the results of phase A (at this point under the assumption of SB). Based on phase B's simple role as a single-dimensional bidding competition over the discount, we next assert that the overall result is efficient under SB, which in turn proves to be an approximately ex-post equilibrium strategy over the two phases.
LEMMA 16. If sellers s_i and s_j are SB, and s_i is active at least as long as s_j is active in phase B, then σ_i(η_i) ≥ max_{θ∈Θ} σ_j(θ) − (e + 2)ε.
THEOREM 17. Given a truthful buyer and SB sellers, the auction is (e + 2)ε-efficient: the surplus of the final allocation is within (e + 2)ε of the maximal surplus.
Following PK, we rely on an equivalence to the one-sided VCG auction to establish incentive properties for the sellers. In the one-sided multiattribute VCG auction, the buyer and sellers report valuation and cost functions û_b, ĉ_i, and the buyer pays the sell-side VCG payment to the winning seller.
DEFINITION 11. Let (θ*, i*) be the optimal solution to MAP, and let (θ̃, ĩ) be the best solution to MAP when i* does not participate. The sell-side VCG payment is
VCG(û_b, ĉ_i) = û_b(θ*) − (û_b(θ̃) − ĉ_ĩ(θ̃)).
It is well known that truthful bidding is a dominant strategy for sellers in the one-sided VCG auction. It is also shown by PK that the maximal regret for buyers from bidding truthfully in this mechanism is u_b(θ*) − c_{i*}(θ*) − (u_b(θ̃) − ĉ_ĩ(θ̃)), that is, the marginal product of the efficient seller.
Usually in iterative auctions the VCG outcome is only nearly achieved, with the deviation bounded by the minimal price change. We show a similar result, and therefore define δ-VCG payments.
DEFINITION 12. A sell-side δ-VCG payment for MAP is a payment p such that VCG(û_b, ĉ_i) − δ ≤ p ≤ VCG(û_b, ĉ_i) + δ.
When the payment is guaranteed to be δ-VCG, sellers can only affect their payment within that range, therefore their gain from falsely reporting their cost is bounded by 2δ.
LEMMA 18. When sellers are SB, the payment at the end of the GAI auction is sell-side (e + 2)ε-VCG.
THEOREM 19. SB is a (3e + 5)ε ex-post Nash equilibrium for sellers in the GAI auction. That is, sellers cannot gain more than (3e + 5)ε by deviating.
In practice, however, sellers are unlikely to have the information that would let them exploit that potential gain. They are much more likely to lose from bidding on their less attractive configurations.
5.5 Computation and Complexity
The size of the price space maintained in the auction is equal to the total number of sub-configurations, meaning it is exponential in max_r |I_r|. This is also equivalent to the tree-width (plus one) of the original CDI map. For the purpose of the computational analysis let d_j denote the domain of attribute a_j, and let I = ∪_{r=1}^{g} ∏_{j∈I_r} d_j denote the collection of all sub-configurations. The first purpose of this subsection is to show that the complexity of all the computations required for the auction depends only on |I|; that is, no computation depends on the size of the full exponential domain.
We are first concerned with the computation of M^t. Since M^t grows monotonically with t, a naive application of an optimization algorithm to generate the best outcomes sequentially might end up enumerating significant portions of the fully exponential domain. However, as shown below, this plain enumeration can be avoided.
PROPOSITION 20. The computation of M^t can be done in time O(|I|^2). Moreover, the total time spent on this task throughout the auction is O(|I|(|I| + T)).
The bounds are in practice significantly lower, based on results for similar problems from the probabilistic reasoning literature [18]. One of the benefits of the compact pricing structure is the compact representation it lends to bids: sellers submit only sub-bids, and therefore the number of them submitted and stored per seller is bounded by |I|. Since the computational tasks (testing whether B_i^t = ∅, evaluating rule [SWITCH], and choosing η_i) all involve the set B_i^t, their cost depends only on the size of B_i^t, since they are all subsumed by the combinatorial optimization task over B_i^t or B_i^t ∩ M^t.
Next, we analyze the number of rounds it takes for the auction to terminate. Phase B requires max_{i=1,...,n} π_i^T(η_i)/ε rounds. Since this is equivalent to price-only auctions, the concern is only with the time complexity of phase A. Since prices cannot go below f_{b,r}(θ_r), an upper bound on the number of rounds required is
T ≤ Σ_{θ_r∈I} (p^1(θ_r) − f_{b,r}(θ_r)) · (g/ε).
However, phase A may converge faster. Let the initial negative profit chosen by the auctioneer be m = max_{θ∈Θ} π_b^1(θ). In the worst case phase A needs to run until π_b(θ) = m for all θ ∈ Θ. This happens, for example, when p^t(θ_r) = f_{b,r}(θ_r) + m/g for all θ_r ∈ I. In general, the closer the initial prices reflect the buyer's valuation, the faster phase A converges. One extreme is to choose p^1(θ_r) = f_{b,r}(θ_r) + m/g. That would make phase A redundant, at the cost of full initial revelation of the buyer's valuation, as done in other mechanisms discussed below.
Between this option and the other extreme, in which p^1(α) = p^1(α̂) for all α, α̂ ∈ I, the auctioneer has a range of choices to determine the right tradeoff between convergence time and information revelation. In the example below, the choice of a lower initial price for the domain of I_1 provides some speedup by revealing a harmless amount of information.
Another potential concern is the communication cost associated with the Japanese auction style. The sellers need to send their bids over and over again at each round. A simple change can be made to avoid much of the redundant communication: the auction can retain sub-bids from previous rounds on sub-configurations whose price did not change. Since combinations of sub-bids from different rounds can yield sub-optimal configurations, each sub-bid should be tagged with the number of the latest round in which it was submitted, and only consistent combinations from the same round are considered to be full bids. With this implementation sellers need not resubmit their bids until the price of at least one sub-configuration has changed.
5.6 Example
We use the example settings introduced in Section 5.2. Recall that the GAI structure is I_1 = {a, b}, I_2 = {b, c} (note that e = 1). Table 1 shows the GAI utilities for the buyer and the two sellers s_1, s_2.

        I_1:  a1b1  a2b1  a1b2  a2b2    I_2:  b1c1  b2c1  b1c2  b2c2
  f_b          65    50    55    70            50    85    60    75
  f_1          35    20    30    70            65    65    70    61
  f_2          35    20    25    25            55   110    70    95

Table 1: GAI utility functions for the example domain. f_b represents the buyer's valuation, and f_1 and f_2 the costs of the sellers s_1 and s_2.

The efficient allocation is (s_1, a1 b2 c1) with a surplus of 45. The maximal surplus of the second-best seller, s_2, is 25, achieved by a1 b1 c1, a2 b1 c1, and a2 b2 c2. We set all initial prices over I_1 to 75, and all initial prices over I_2 to 90. We set ε = 8, meaning that the price reduction for sub-configurations is ε/g = 4. Though with these numbers it is not guaranteed by Theorem 17, we expect s_1 to win on either the efficient allocation or on a1 b2 c2, which provides a surplus of 39. The reason is that these are the only two configurations which are within (e + 1)ε = 16 of being efficient for s_1 (therefore one of them must be chosen by phase A), and both provide more than ε surplus over s_2's most efficient configuration (and this is sufficient in order to win in phase B). Table 2 shows the progress of phase A.
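The headline quantities of this example can be checked directly from Table 1. The short, self-contained script below is illustrative only; it recovers the efficient allocation and its surplus of 45, s_2's maximal surplus of 25, the surplus of 39 for a1 b2 c2, and the sell-side VCG payment of Definition 11, whose induced profit of 20 for s_1 is referred to at the end of the walkthrough.

from itertools import product

fb = {('a1','b1'): 65, ('a2','b1'): 50, ('a1','b2'): 55, ('a2','b2'): 70,
      ('b1','c1'): 50, ('b2','c1'): 85, ('b1','c2'): 60, ('b2','c2'): 75}
f1 = {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 30, ('a2','b2'): 70,
      ('b1','c1'): 65, ('b2','c1'): 65, ('b1','c2'): 70, ('b2','c2'): 61}
f2 = {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 25, ('a2','b2'): 25,
      ('b1','c1'): 55, ('b2','c1'): 110, ('b1','c2'): 70, ('b2','c2'): 95}

def value(f, a, b, c):                        # GAI sum over I_1 = {a,b}, I_2 = {b,c}
    return f[(a, b)] + f[(b, c)]

configs = list(product(['a1','a2'], ['b1','b2'], ['c1','c2']))

def best_surplus(cost):
    return max((value(fb, *t) - value(cost, *t), t) for t in configs)

s1_surplus, s1_theta = best_surplus(f1)       # (45, ('a1', 'b2', 'c1'))
s2_surplus, _ = best_surplus(f2)              # 25
print(s1_surplus, s1_theta, s2_surplus)
print(value(fb, 'a1', 'b2', 'c2') - value(f1, 'a1', 'b2', 'c2'))   # 39
# Sell-side VCG payment (Definition 11) and s1's resulting VCG profit:
vcg_payment = value(fb, *s1_theta) - s2_surplus                    # 140 - 25 = 115
print(vcg_payment - value(f1, *s1_theta))                          # 115 - 95 = 20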
Initially all configuration have the same cost (165), so sellers bid on their lowest cost configuration which is a2 b1 c1 for both (with profit 80 to s1 and 90 to s2), and that translates to sub-bids on a2 b1 and b1 c1 .\nM1 contains the sub-configurations a2 b2 and b2 c1 of the highest value configuration a2 b2 c1 .\nPrice is therefore decreased on a2 b1 and b1 c1 .\nAfter the price change, s1 has higher profit (74) on a1 b2 c2 and she therefore bids on a1 b2 and b2 c2 .\nNow (round 2) their prices go down, reducing the profit on a1 b2 c2 to 66 and therefore in round 3 s1 prefers a2 b1 c2 (profit 67).\nAfter the next price change the configurations a1 b2 c1 and a1 b2 c2 both become optimal (profit 66), and the subbids a1 b2 , b2 c1 and b2 c2 capture the two.\nThese configurations stay optimal for another round (5), with profit 62.\nAt this point s1 has a full bid (in fact two full bids: a1 b2 c2 and a1 b2 c1 ) in M5 , and I1 I2 t a1b1 a2b1 a1b2 a2b2 b1c1 b2c1 b1c2 b2c2 1 75 75 75 75 90 90 90 90 s1, s2 \u2217 s1, s2 \u2217 2 75 71 75 75 86 90 90 90 s2 s1 \u2217 s2 \u2217 s1 3 75 67 71 75 82 90 90 86 s1, s2 \u2217 s2 \u2217 s1 \u2217 4 75 63 71 75 78 90 86 86 s2 s1 \u2217 s2 \u2217, s1 \u2217, s1 5 75 59 67 75 74 90 86 86 s2 \u2217, s1 \u2217 s2 \u2217, s1 \u2217, s1 6 71 59 67 75 70 90 86 86 s2 \u2217, s1 \u2217 \u2217, s1 s2 \u2217, s1 7 71 55 67 75 70 90 82 86 s2 \u2217, s1 \u2217 s2 \u2217, s1 \u2217, s1 8 67 55 67 75 66 90 82 86 \u2217 s2 \u2217, s1 \u2217 \u2217 \u2217, s1 s2 \u2217, s1 9 67 51 67 75 66 90 78 86 \u2217, s2 \u2217, s1 \u2217 \u2217, s2 \u2217, s1 \u2217, s1 Table 2: Auction progression in phase A. Sell bids and designation of Mt (using \u2217) are shown below the price of each subconfiguration.\ntherefore she no longer changes her bids since the price of her optimal configurations does not decrease.\ns2 sticks to a2 b1 c1 during the first four rounds, switching to a1 b1 c1 in round 5.\nIt takes four more rounds for s2 and Mt to converge (M10 \u2229B10 2 = {a1 b1 c1 }).\nAfter round 9 the auction sets \u03b71 = a1 b2 c1 (which yields more buyer profit than a1 b2 c2 ) and \u03b72 = a1 b1 c1 .\nFor the next round (10) \u0394 = 8, increased by 8 for each subsequent round.\nNote that p9 (a1 b1 c1 ) = 133, and c2(a1 b1 c1 ) = 90, therefore \u03c0T 2 (\u03b72) = 43.\nIn round 15, \u0394 = 48 meaning p15 (a1 b1 c1 ) = 85 and that causes s2 to drop out, setting the final allocation to (s1, a1 b2 c1 ) and p15 (a1 b2 c1 ) = 157 \u2212 48 = 109.\nThat leaves the buyer with a profit of 31 and s1 with a profit of 14, less than below the VCG profit 20.\nThe welfare achieved in this case is optimal.\nTo illustrate how some efficiency loss could occur consider the case that c1(b2 c2 ) = 60.\nIn that case, in round 3 the configuration a1 b2 c2 provides the same profit (67) as a2 b1 c2 , and s1 bids on both.\nWhile a2 b1 c2 is no longer optimal after the price change, a1 b2 c2 remains optimal on subsequent rounds because b2 c2 \u2208 Mt , and the price change of a1 b2 affects both a1 b2 c2 and the efficient configuration a1 b2 c1 .\nWhen phase A ends B10 1 \u2229 M10 = {a1 b2 c2 } so the auction terminates with the slightly suboptimal configuration and surplus 40.\n6.\nDISCUSSION 6.1 Preferential Assumptions A key aspect in implementing GAI based auctions is the choice of the preference structure, that is, the elements {I1, ... 
, Ig}.\nIn some domains the structure can be more or less robust over time and over different decision makers.\nWhen this is not the case, extracting reliable structure from sellers (in the form of CDI conditions) is a serious challenge.\nThis could have been a deal breaker for such domains, but in fact it can be overcome.\nIt turns out that we can run this auction without any assumptions on sellers'' preference structure.\nThe only place where this assumption is used in our analysis is for Lemma 8.\nIf sellers whose preference structure does not agree with the one used by the auction are guided to submit only one full bid at each round, or a set of bids that does not yield undesired consistent combinations, all the properties of the auction 235 still hold.\nLocally, the sellers can optimize their profit functions using the union of their GAI structure with the auction``s structure.\nIt is therefore essential only that the buyer``s preference structure is accurately modeled.\nOf course, capturing sellers'' structures as well is still preferred since it can speed up the execution and let sellers take advantage of the compact bid representation.\nIn both cases the choice of clusters may significantly affect the complexity of the price structure and the runtime of the auction.\nIt is sometimes better to ignore some weaker interdependencies in order to reduce dimensionality.\nThe complexity of the structure also affects the efficiency of the auction through the value of e. 6.2 Information Revelation Properties In considering information properties of this mechanism we compare to the standard approach for iterative multiattribute auctions, which is based on the theoretical foundations of Che [7].\nIn most of these mechanisms the buyer reveals a scoring function and then the mechanism solicits bids from the sellers [3, 22, 8, 21] (the mechanisms suggested by Beil and Wein [2] is different since buyers can modify their scoring function each round, but the goal there is to maximize the buyer``s profit).\nWhereas these iterative procurement mechanisms tend to relieve the burden of information revelation from the sellers, a major drawback is that the buyer``s utility function must be revealed to the sellers before receiving any commitment.\nIn the mechanisms suggested by PK and in our GAI auction above, buyer information is revealed only in exchange for sell commitments.\nIn particular, sellers learn nothing (beyond the initial price upper bound, which can be arbitrarily loose) about the utility of configurations for which no bid was submitted.\nWhen bids are submitted for a configuration \u03b8, sellers would be able to infer its utility relative to the current preferred configurations only after the price of \u03b8 is driven down sufficiently to make it a preferred configuration as well.\n6.3 Conclusions We propose a novel exploitation of preference structure in multiattribute auctions.\nRather than assuming full additivity, or no structure at all, we model preferences using the GAI decomposition.\nWe developed an iterative auction mechanism directly relying on the decomposition, and also provided direct means of constructing the representation from relatively simple statements of willingnessto-pay.\nOur auction mechanism generalizes PK``s preference modeling, while in essence retaining their information revelation properties.\nIt allows for a range of tradeoffs between accuracy of preference representation and both the complexity of the pricing structure and efficiency of the auction, as well as 
tradeoffs between buyer``s information revelation and the time required for convergence.\n7.\nACKNOWLEDGMENTS This work was supported in part by NSF grants IIS-0205435 and IIS-0414710, and the STIET program under NSF IGERT grant 0114368.\nWe are grateful to comments from anonymous reviewers.\n8.\nREFERENCES [1] F. Bacchus and A. Grove.\nGraphical models for preference and utility.\nIn Eleventh Conference on Uncertainty in Artificial Intelligence, pages 3-10, Montreal, 1995.\n[2] D. R. Beil and L. M. Wein.\nAn inverse-optimization-based auction for multiattribute RFQs.\nManagement Science, 49:1529-1545, 2003.\n[3] M. Bichler.\nThe Future of e-Markets: Multi-Dimensional Market Mechanisms.\nCambridge University Press, 2001.\n[4] C. Boutilier, F. Bacchus, and R. I. Brafman.\nUCP-networks: A directed graphical representation of conditional utilities.\nIn Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 56-64, Seattle, 2001.\n[5] R. I. Brafman, C. Domshlak, and T. Kogan.\nCompact value-function representations for qualitative preferences.\nIn Twentieth Conference on Uncertainty in Artificial Intelligence, pages 51-59, Banff, 2004.\n[6] D. Braziunas and C. Boutilier.\nLocal utility elicitation in GAI models.\nIn Twenty-first Conference on Uncertainty in Artificial Intelligence, pages 42-49, Edinburgh, 2005.\n[7] Y.-K.\nChe.\nDesign competition through multidimensional auctions.\nRAND Journal of Economics, 24(4):668-680, 1993.\n[8] E. David, R. Azoulay-Schwartz, and S. Kraus.\nAn English auction protocol for multi-attribute items.\nIn Agent Mediated Electronic Commerce IV: Designing Mechanisms and Systems, volume 2531 of Lecture Notes in Artificial Intelligence, pages 52-68.\nSpringer, 2002.\n[9] G. Debreu.\nTopological methods in cardinal utility theory.\nIn K. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Methods in the Social Sciences.\nStanford Univ..\nPress, 1959.\n[10] J. S. Dyer and R. K. Sarin.\nAn axiomatization of cardinal additive conjoint measurement theory.\nWorking Paper 265, WMSI, UCLA, February 1977.\n[11] J. S. Dyer and R. K. Sarin.\nMeasurable multiattribute value functions.\nOperations Research, 27:810-822, 1979.\n[12] Y. Engel, M. P. Wellman, and K. M. Lochner.\nBid expressiveness and clearing algorithms in multiattribute double auctions.\nIn Seventh ACM Conference on Electronic Commerce, pages 110-119, Ann Arbor, MI, 2006.\n[13] P. C. Fishburn.\nInterdependence and additivity in multivariate, unidimensional expected utility theory.\nIntl..\nEconomic Review, 8:335-342, 1967.\n[14] C. Gonzales and P. Perny.\nGAI networks for utility elicitation.\nIn Ninth Intl..\nConf.\non the Principles of Knowledge Representation and Reasoning, pages 224-234, Whistler, BC, 2004.\n[15] C. Gonzales and P. Perny.\nGAI networks for decision making under certainty.\nIn IJCAI-05 Workshop on Advances in Preference Handling, Edinburgh, 2005.\n[16] N. Hyafil and C. Boutilier.\nRegret-based incremental partial revelation mechanisms.\nIn Twenty-first National Conference on Artificial Intelligence, pages 672-678, Boston, MA, 2006.\n[17] R. L. Keeney and H. Raiffa.\nDecisions with Multiple Objectives: Preferences and Value Tradeoffs.\nWiley, 1976.\n[18] D. Nilsson.\nAn efficient algorithm for finding the M most probable configurations in probabilistic expert systems.\nStatistics and Computinge, 8(2):159-173, 1998.\n[19] D. C. Parkes and J. Kalagnanam.\nModels for iterative multiattribute procurement auctions.\nManagement Science, 51:435-451, 2005.\n[20] J. Pearl and A. 
Paz.\nGraphoids: A graph based logic for reasoning about relevance relations.\nIn B. Du Boulay, editor, Advances in Artificial Intelligence II.\n1989.\n[21] J. Shachat and J. T. Swarthout.\nProcurement auctions for differentiated goods.\nIBM Research Report RC22587, IBM T.J. Watson Research Laboratory, 2002.\n[22] N. Vulkan and N. R. Jennings.\nEfficient mechanisms for the supply of services in multi-agent environments.\nDecision Support Systems, 28:5-19, 2000.\n236", "lvl-3": "Generalized Value Decomposition and Structured Multiattribute Auctions\nABSTRACT\nMultiattribute auction mechanisms generally either remain agnostic about traders ' preferences , or presume highly restrictive forms , such as full additivity .\nReal preferences often exhibit dependencies among attributes , yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction .\nWe develop such a structure using the theory of measurable value functions , a cardinal utility representation based on an underlying order over preference differences .\nA set of local conditional independence relations over such differences supports a generalized additive preference representation , which decomposes utility across overlapping clusters of related attributes .\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations .\nWhen traders ' preferences are consistent with the auction 's generalized additive structure , the mechanism produces approximately optimal allocations , at approximate VCG prices .\n1 .\nINTRODUCTION\nMultiattribute trading mechanisms extend traditional , price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal .\nRather than negotiating over a fully defined good or service , a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified .\nFor example , a procurement department of a company may use a multiattribute auction to select a supplier of hard drives .\nSupplier offers may be evaluated not only over the price they offer , but also over various qualitative attributes such as volume , RPM , access time , latency , transfer rate , and so on .\nIn addition , suppliers may offer different contract conditions such as warranty , delivery time , and service .\nIn order to account for traders ' preferences , the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations .\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes , therefore practical multiattribute auctions must either accommodate partial specifications , or support compact expression of preferences assuming some simplified form .\nBy far the most popular multiattribute form to adopt is the simplest : an additive representation where overall value is a linear combination of values associated with each attribute .\nFor example , several recent proposals for iterative multiattribute auctions [ 2 , 3 , 8 , 19 ] require additive preference representations .\nSuch additivity reduces the complexity of preference specification exponentially ( compared to the general discrete case ) , but precludes expression of any interdependencies among the attributes .\nIn practice , however , interdependencies among natural attributes are 
quite common .\nFor example , the buyer may exhibit complementary preferences for size and access time ( since the performance effect is more salient if much data is involved ) , or may view a strong warranty as a good substitute for high reliability ratings .\nSimilarly , the seller 's production characteristics ( such as `` increasing access time is harder for larger hard drives '' ) can easily violate additivity .\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences .\nOn the other hand , fully general models are intractable , and it is reasonable to expect multiattribute preferences to exhibit some structure .\nOur goal , therefore , is to identify the subtler yet more widely applicable structured representations , and exploit these properties of preferences in trading mechanisms .\nWe propose an iterative auction mechanism based on just such a flexible preference structure .\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences , due to Parkes and Kalagnanam ( PK ) [ 19 ] .\nPK propose two types of iterative auctions : the first ( NLD ) makes no assumptions about traders ' preferences , and lets sellers bid on the full multidimensional attribute space .\nBecause NLD maintains an exponential price structure , it is suitable only for small domains .\nThe other auction ( AD ) assumes additive buyer valuation and seller cost functions .\nIt collects sell bids per attribute level and for a single discount term .\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount .\nThe auction we propose also supports compact price spaces , albeit for levels of clusters of attributes rather than singletons .\nWe employ a preference decomposition based on generalized additive independence ( GAI ) , a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired , yet providing a compact functional form to the extent that interdependence can be limited .\nGiven its roots in multiattribute utility theory [ 13 ] ,\nthe GAI condition is defined with respect to the expected utility function .\nTo apply it for modeling values for certain outcomes , therefore , requires a reinterpretation for preference under certainty .\nTo this end , we exploit the fact that auction outcomes are associated with continuous prices , which provide a natural scale for assessing magnitude of preference .\nWe first lay out a representation framework for preferences that captures , in addition to simple orderings among attribute configuration values , the difference in the willingness to pay ( wtp ) for each .\nThat is , we should be able not only to compare outcomes but also decide whether the difference in quality is worth a given difference in price .\nNext , we build a direct , formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function .\nAfter laying out this infrastructure , we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format .\nWe then study the auction 's allocational , computational , and practical properties .\nIn Section 2 we present essential background on our representation framework , the measurable value function ( MVF ) .\nSection 3 develops new multiattribute structures for MVF , supporting generalized 
additive decompositions .\nNext , we show the applicability of the theoretical framework to preferences in trading .\nThe rest of the paper is devoted to the proposed auction mechanism .\n2 .\nMULTIATTRIBUTE PREFERENCES\n2.1 Preferential Independence\n2.2 Measurable Value Functions\n3 .\nADVANCED MVF STRUCTURES\n3.1 Conditional Difference Independence\n3.2 GAI Structure for MVF\n4 .\nWILLINGNESS-TO-PAY AS AN MVF\n4.1 Construction\n4.2 Optimization\n5 .\nGAI IN MULTIATTRIBUTE AUCTIONS\n5.1 The Multiattribute Procurement Problem\n5.2 GAI Trees\n5.3 The GAI Auction\n5.4 Economic Analysis\n5.5 Computation and Complexity\nQ\n5.6 Example", "lvl-4": "Generalized Value Decomposition and Structured Multiattribute Auctions\nABSTRACT\nMultiattribute auction mechanisms generally either remain agnostic about traders ' preferences , or presume highly restrictive forms , such as full additivity .\nReal preferences often exhibit dependencies among attributes , yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction .\nWe develop such a structure using the theory of measurable value functions , a cardinal utility representation based on an underlying order over preference differences .\nA set of local conditional independence relations over such differences supports a generalized additive preference representation , which decomposes utility across overlapping clusters of related attributes .\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations .\nWhen traders ' preferences are consistent with the auction 's generalized additive structure , the mechanism produces approximately optimal allocations , at approximate VCG prices .\n1 .\nINTRODUCTION\nMultiattribute trading mechanisms extend traditional , price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal .\nRather than negotiating over a fully defined good or service , a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified .\nFor example , a procurement department of a company may use a multiattribute auction to select a supplier of hard drives .\nIn order to account for traders ' preferences , the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations .\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes , therefore practical multiattribute auctions must either accommodate partial specifications , or support compact expression of preferences assuming some simplified form .\nBy far the most popular multiattribute form to adopt is the simplest : an additive representation where overall value is a linear combination of values associated with each attribute .\nFor example , several recent proposals for iterative multiattribute auctions [ 2 , 3 , 8 , 19 ] require additive preference representations .\nSuch additivity reduces the complexity of preference specification exponentially ( compared to the general discrete case ) , but precludes expression of any interdependencies among the attributes .\nIn practice , however , interdependencies among natural attributes are quite common .\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences 
.\nOn the other hand , fully general models are intractable , and it is reasonable to expect multiattribute preferences to exhibit some structure .\nOur goal , therefore , is to identify the subtler yet more widely applicable structured representations , and exploit these properties of preferences in trading mechanisms .\nWe propose an iterative auction mechanism based on just such a flexible preference structure .\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences , due to Parkes and Kalagnanam ( PK ) [ 19 ] .\nPK propose two types of iterative auctions : the first ( NLD ) makes no assumptions about traders ' preferences , and lets sellers bid on the full multidimensional attribute space .\nBecause NLD maintains an exponential price structure , it is suitable only for small domains .\nThe other auction ( AD ) assumes additive buyer valuation and seller cost functions .\nIt collects sell bids per attribute level and for a single discount term .\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount .\nThe auction we propose also supports compact price spaces , albeit for levels of clusters of attributes rather than singletons .\nGiven its roots in multiattribute utility theory [ 13 ] ,\nthe GAI condition is defined with respect to the expected utility function .\nTo apply it for modeling values for certain outcomes , therefore , requires a reinterpretation for preference under certainty .\nTo this end , we exploit the fact that auction outcomes are associated with continuous prices , which provide a natural scale for assessing magnitude of preference .\nWe first lay out a representation framework for preferences that captures , in addition to simple orderings among attribute configuration values , the difference in the willingness to pay ( wtp ) for each .\nNext , we build a direct , formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function .\nAfter laying out this infrastructure , we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format .\nWe then study the auction 's allocational , computational , and practical properties .\nIn Section 2 we present essential background on our representation framework , the measurable value function ( MVF ) .\nSection 3 develops new multiattribute structures for MVF , supporting generalized additive decompositions .\nNext , we show the applicability of the theoretical framework to preferences in trading .\nThe rest of the paper is devoted to the proposed auction mechanism .", "lvl-2": "Generalized Value Decomposition and Structured Multiattribute Auctions\nABSTRACT\nMultiattribute auction mechanisms generally either remain agnostic about traders ' preferences , or presume highly restrictive forms , such as full additivity .\nReal preferences often exhibit dependencies among attributes , yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction .\nWe develop such a structure using the theory of measurable value functions , a cardinal utility representation based on an underlying order over preference differences .\nA set of local conditional independence relations over such differences supports a generalized additive preference representation , which decomposes 
utility across overlapping clusters of related attributes .\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations .\nWhen traders ' preferences are consistent with the auction 's generalized additive structure , the mechanism produces approximately optimal allocations , at approximate VCG prices .\n1 .\nINTRODUCTION\nMultiattribute trading mechanisms extend traditional , price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal .\nRather than negotiating over a fully defined good or service , a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified .\nFor example , a procurement department of a company may use a multiattribute auction to select a supplier of hard drives .\nSupplier offers may be evaluated not only over the price they offer , but also over various qualitative attributes such as volume , RPM , access time , latency , transfer rate , and so on .\nIn addition , suppliers may offer different contract conditions such as warranty , delivery time , and service .\nIn order to account for traders ' preferences , the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations .\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes , therefore practical multiattribute auctions must either accommodate partial specifications , or support compact expression of preferences assuming some simplified form .\nBy far the most popular multiattribute form to adopt is the simplest : an additive representation where overall value is a linear combination of values associated with each attribute .\nFor example , several recent proposals for iterative multiattribute auctions [ 2 , 3 , 8 , 19 ] require additive preference representations .\nSuch additivity reduces the complexity of preference specification exponentially ( compared to the general discrete case ) , but precludes expression of any interdependencies among the attributes .\nIn practice , however , interdependencies among natural attributes are quite common .\nFor example , the buyer may exhibit complementary preferences for size and access time ( since the performance effect is more salient if much data is involved ) , or may view a strong warranty as a good substitute for high reliability ratings .\nSimilarly , the seller 's production characteristics ( such as `` increasing access time is harder for larger hard drives '' ) can easily violate additivity .\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences .\nOn the other hand , fully general models are intractable , and it is reasonable to expect multiattribute preferences to exhibit some structure .\nOur goal , therefore , is to identify the subtler yet more widely applicable structured representations , and exploit these properties of preferences in trading mechanisms .\nWe propose an iterative auction mechanism based on just such a flexible preference structure .\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences , due to Parkes and Kalagnanam ( PK ) [ 19 ] .\nPK propose two types of iterative auctions : the first ( NLD ) makes no assumptions about traders ' preferences , and lets 
sellers bid on the full multidimensional attribute space .\nBecause NLD maintains an exponential price structure , it is suitable only for small domains .\nThe other auction ( AD ) assumes additive buyer valuation and seller cost functions .\nIt collects sell bids per attribute level and for a single discount term .\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount .\nThe auction we propose also supports compact price spaces , albeit for levels of clusters of attributes rather than singletons .\nWe employ a preference decomposition based on generalized additive independence ( GAI ) , a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired , yet providing a compact functional form to the extent that interdependence can be limited .\nGiven its roots in multiattribute utility theory [ 13 ] ,\nthe GAI condition is defined with respect to the expected utility function .\nTo apply it for modeling values for certain outcomes , therefore , requires a reinterpretation for preference under certainty .\nTo this end , we exploit the fact that auction outcomes are associated with continuous prices , which provide a natural scale for assessing magnitude of preference .\nWe first lay out a representation framework for preferences that captures , in addition to simple orderings among attribute configuration values , the difference in the willingness to pay ( wtp ) for each .\nThat is , we should be able not only to compare outcomes but also decide whether the difference in quality is worth a given difference in price .\nNext , we build a direct , formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function .\nAfter laying out this infrastructure , we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format .\nWe then study the auction 's allocational , computational , and practical properties .\nIn Section 2 we present essential background on our representation framework , the measurable value function ( MVF ) .\nSection 3 develops new multiattribute structures for MVF , supporting generalized additive decompositions .\nNext , we show the applicability of the theoretical framework to preferences in trading .\nThe rest of the paper is devoted to the proposed auction mechanism .\n2 .\nMULTIATTRIBUTE PREFERENCES\nAs mentioned , most tools facilitating expression of multiattribute value for trading applications assume that agents ' preferences can be represented in an additive form .\nBy way of background , we start by introducing the formal prerequisites justifying the additive representation , as provided by multiattribute utility theory .\nWe then present the generalized additive form , and develop the formal underpinnings for measurable value needed to extend this model to the case of choice under certainty .\n2.1 Preferential Independence\nLet \u0398 denote the space of possible outcomes , with a preference relation ( weak total order ) over \u0398 .\nLet A = { a0 , ... 
, am } be a set of attributes describing \u0398 .\nCapital letters denote subsets of variables , small letters ( with or without numeric subscripts ) denote specific variables , and X \u00af denotes the complement of X with respect to A .\nWe indicate specific variable assignments with prime signs or superscripts .\nTo represent an instantiation of subsets X , Y at the same time we use a sequence of instantiation symbols , as in X ' Y ' .\nThe preference relation when no uncertainty is modeled is usually represented by a value function v [ 17 ] .\nThe following fundamental result greatly simplifies the value function representation .\nTHEOREM 1 ( [ 9 ] ) .\nA preference order over set of attributes A has an additive value function representation\nEssentially , the additive forms used in trading mechanisms assume mutual preferential independence over the full set of attributes , including the money attribute .\nIntuitively that means that willingness to pay for value of an attribute or attributes can not be affected by the value of other attributes .\nA cardinal value function representing an ordering over certain outcomes need not in general coincide with the cardinal utility function that represents preference over lotteries or expected utility ( EU ) .\nNevertheless , EU functions may possess structural properties analogous to that for value functions , such as additive decomposition .\nSince the present work does not involve decisions under uncertainty , we do not provide a full exposition of the EU concept .\nHowever we do make frequent reference to the following additive independence relations .\nAn ( expected ) utility function u ( \u00b7 ) can be decomposed additively according to its ( possibly overlapping ) GAI sub-configurations .\nWhat is now known as the GAI condition was originally introduced by Fishburn [ 13 ] for EU , and was named GAI and brought to the attention of AI researchers by Bacchus and Grove [ 1 ] .\nGraphical models and elicitation procedures for GAI decomposable utility were developed for EU [ 4 , 14 , 6 ] , for a cardinal representation of the ordinal value function [ 15 ] , and for an ordinal preference relations corresponding to a TCP-net structure by Brafman et al. 
[ 5 ] .\nApart from the work on GAI in the context of preference handling that were discussed above , GAI have been recently used in the context of mechanism design by Hyafil and Boutilier [ 16 ] , as an aid in direct revelation mechanisms .\nAs shown by Bacchus and Grove [ 1 ] , GAI structure can be identified based on a set of CAI conditions , which are much easier to detect and verify .\nIn general , utility functions may exhibit GAI structure not based on CAI .\nHowever , to date all proposals for reasoning and eliciting utility in GAI form take advantage of the GAI structure primarily to the extent that it represents a collection of CAI conditions .\nFor example , GAI trees [ 14 ] employ triangulation of the CAI map , and Braziunas and Boutilier 's [ 6 ] conditional set Cj of a set Ij corresponds to the CAI separating set of Ij .\nSince the CAI condition is also defined based on preferences over lotteries , we can not apply Bacchus and Grove 's result without first establishing an alternative framework based on priced outcomes .\nWe develop such a framework using the theory of measurable value functions , ultimately producing a GAI decomposition\n( Eq .\n1 ) of the wtp function .\nReaders interested primarily in the multiattribute auction and willing to grant the well-foundedness of the preference structure may skip down to Section 5 .\n2.2 Measurable Value Functions\nTrading decisions represent a special case of decisions under certainty , where choices involve multiattribute outcomes and corresponding monetary payments .\nIn such problems , the key decision often hinges on relative valuations of price differences compared to differences in alternative configurations of goods and services .\nTheoretically , price can be treated as just another attribute , however , such an approach fails to exploit the special character of the money dimension , and can significantly add to complexity due to the inherent continuity and typical wide range of possible monetary outcome values .\nWe build on the fundamental work of Dyer and Sarin [ 10 , 11 ] on measurable value functions ( MVFs ) .\nAs we show below , wtp functions in a quasi-linear setting can be interpreted as MVFs .\nHowever we first present the MVF framework in a more generic way , where the measurement is not necessarily monetary .\nWe present the essential definitions and refer to Dyer and Sarin for more detailed background and axiomatic treatment .\nThe key concept is that of preference difference .\nLet \u03b81 , \u03b82 , \u03d11 , \u03d12 E \u0398 such that \u03b81 ~ \u03b82 and \u03d11 _ \u03d12 .\n[ \u03b82 , \u03b81 ] denotes the preference difference between \u03b82 and \u03b81 , interpreted as the strength , or degree , to which \u03b82 is preferred over \u03b81 .\nLet \u2217 denote a preference order over \u0398 x \u0398 .\nWe interpret the statement\nas `` the preference of \u03d12 over \u03d11 is at least as strong as the preference of \u03b82 over \u03b81 '' .\nWe use the symbol -- \u2217 to represent equality of preference differences .\nNote that an MVF can also be used as a value function representing , since [ \u03b8 ~ , \u03b8 ] \u2217 [ \u03b8 ~ ~ , \u03b8 ] iff \u03b8 ~ \u03b8 ~ ~ .\nDEFINITION 6 ( [ 11 ] ) .\nAttribute set X C A is called difference independent of X \u00af iffor any two assignments X1 \u00af X ~ X2 \u00af X ~ , [ X1 \u00af X ~ , X2 \u00af X ~ ] _ \u2217 [ X1 \u00af X ~ ~ , X2 \u00af X ~ ~ ] for any assignment \u00af X ~ ~ .\nOr , in words , the preference differences on assignments to X given a 
fixed level of X \u00af do not depend on the particular level chosen for \u00af X .\nAs with additive independence for EU , this condition is stronger than preferential independence of X. Also analogously to EU , mutual preferential independence combined with other conditions leads to additive decomposition of the MVF .\nMoreover , Dyer and Sarin [ 11 ] have defined analogs of utility independence [ 17 ] for MVF , and worked out a parallel set of decomposition results .\n3 .\nADVANCED MVF STRUCTURES\n3.1 Conditional Difference Independence\nOur first step is to generalize Definition 6 to a conditional version .\nSince the conditional set is always the complement , we sometimes leave it implicit , using the abbreviated notation CDI ( X , Y ) .\nCDI leads to a decomposition similar to that obtained from CAI [ 17 ] .\nLEMMA 3 .\nLet u ( A ) be an MVF representing preference differences .\nThen CDI ( X , Y I Z ) iff\nTo complete the analogy with CAI , we generalize Lemma 3 as follows .\nAn immediate result of Proposition 4 is that CDI is a symmetric relation .\nThe conditional independence condition is much more applicable than the unconditional one .\nFor example , if attributes a E X and b E / X are complements or substitutes , X can not be difference independent of \u00af X. However , X \\ { a } may still be CDI of X \u00af given a.\n3.2 GAI Structure for MVF\nA single CDI condition decomposes the value function into two parts .\nWe seek a finer-grain global decomposition of the utility function , similar to that obtained from mutual preferential independence .\nFor this purpose we are now ready to employ the results of Bacchus and Grove [ 1 ] , who establish that the CAI condition has a perfect map [ 20 ] ; that is , there exists a graph whose nodes correspond to the set A , and its node separation reflects exactly the complete set of CAI conditions on A. Moreover , they show that the utility function decomposes over the set of maximal cliques of the perfect map .\nTheir proofs can be easily adapted to CDI , since they only rely on the decomposition property of CAI that is also implied by CDI according to Proposition 4 .\nwhere I1 , ... , Ig are ( overlapping ) subsets of A , each corresponding to a maximal clique of G. Given Theorem 5 , we can now identify an MVF GAI structure from a collection of CDI conditions .\nThe CDI conditions , in turn , are particularly intuitive to detect when the preference differences carry a direct interpretation , as in the case with monetary differences discussed below .\nMoreover , the assumption or detection of CDI conditions can be performed incrementally , until the MVF is decomposed to a reasonable dimension .\nThis is in contrast with the fully additive decomposition of MVF that requires mutual preferential independence [ 11 ] .\nTheorem 5 defines a decomposition structure , but to represent the actual MVF we need to specify the functions over the cliques .\nThe next theorem establishes that the functional constituents of MVF are the same as those for GAI decompositions as defined by Fishburn [ 13 ] for EU .\nWe adopt the following conventional notation .\nLet ( a01 , ... 
, a0m ) be a predefined vector called the reference outcome .\nFor any I C _ A , the function u ( [ I ] ) stands for the projection of u ( A ) to I where the rest of the attributes are fixed at their reference levels .\nThe proof directly shows that if graph G = ( A , E ) is a perfect map of CDI , u ( A ) decomposes to a sum over the functions defined in ( 4 ) .1 Thus this proof does not rely on the decomposition result of Theorem 5 , only on the existence of the perfect map .\nTo summarize , the results of this section generalize additive MVF theory .\nIn particular it justifies the application of methods recently developed under the EU framework [ 1 , 4 , 14 , 6 ] to representation of value under certainty .\n4 .\nWILLINGNESS-TO-PAY AS AN MVF\n4.1 Construction\nIn this section we apply measurable value to represent differences of willingness to pay for outcomes .\nWe assume that the agent has a preference order over outcome space , represented by a set of attributes A , and an attribute p representing monetary consequence .\nNote that in evaluating a purchase decision , p would correspond to the agent 's money holdings net of the transaction ( i.e. , wealth after purchase ) , not the purchase price .\nAn outcome in this space is represented for example by ( \u03b8 ~ , p ~ ) , where \u03b8 ~ is an instantiation of A and p ~ is a value of p .\nWe further assume that preferences are quasi-linear in p , that is there exists a value function of the form v ( A , p ) = u ( A ) + L ( p ) , where L is a positive linear function .2 The quasi-linear form immediately qualifies money as a measure of preference differences , and establishes a monetary scale for u ( A ) .\nDEFINITION 8 .\nLet v ( A , p ) = u ( A ) + L ( p ) represent , where p is the attribute representing money .\nWe call u ( A ) a willingnessto-pay ( wtp ) function .\nNote that wtp may also refer to the seller 's `` willingness to accept '' function .\nThe wtp u ( A ) is a cardinal function , unique up to a positive linear transformation .\nSince\n( where \u03b81 , \u03b81 E \u0398 , the domain of A ) the wtp function can be used to choose among priced outcomes .\n1This proof and most other proofs in this paper are omitted for space consideration , and are available in an online appendix .\n2In many procurement applications , the deals in question are small relative to the enterprises involved , so the quasi-linearity assumption is warranted .\nThis assumption can be relaxed to a condition called corresponding tradeoffs [ 17 ] , which does not require the value over money to be linear .\nTo simplify the presentation , however , we maintain the stronger assumption .\nNaturally , elicitation of wtp function is most intuitive when using direct monetary values .\nIn other words , we elicit a function in which L ( p ) = p , so v ( A , p ) = u ( A ) + p .\nWe define a reference outcome ( \u03b80 , p0 ) , and assuming continuity of p , for any assignment \u03b8\u02c6 there exists a p\u02c6 such that ( \u02c6\u03b8 , \u02c6p ) \u223c ( \u03b80 , p0 ) .\nAs v is normalized such that v ( \u03b80 , p0 ) = 0 , p\u02c6 is interpreted as the wtp for \u02c6\u03b8 , or the reserve price of \u02c6\u03b8 .\nPROPOSITION 7 .\nThe wtp function is an MVF over differences in the reserve prices .\nWe note that the wtp function is used extensively in economics , and that all the development in Section 3 could be performed directly in terms of wtp , relying on quasi-linearity for preference measurement , and without formalization using MVFs .\nThis 
formalization however aligns this work with the fundamental difference independence theory by Dyer and Sarin .\nIn addition to facilitating the detection of GAI structure , the CDI condition supports elicitation using local queries , similar to how CAI is used by Braziunas and Boutilier [ 6 ] .\nWe adopt their definition of conditional set of Ir , noted here Sr , as the set of neighbors of attributes in Ir not including the attributes of Ir .\nClearly , Sr is the separating set of Ir in the CDI map , hence CDI ( Ir , Vr ) , where\nEliciting the wtp function therefore amounts to eliciting the utility ( wtp ) of one full outcome ( the reference outcome \u03b80 ) , and then obtaining the function over each maximal clique using monetary differences between its possible assignments ( technique known as pricing out [ 17 ] ) , keeping the variables in the conditional set fixed .\nThese ceteris paribus elicitation queries are local in the sense that the agent does not need to consider the values of the rest of the attributes .\nFurthermore , in eliciting MVFs we can avoid the global scaling step that is required for EU functions .\nSince the preference differences are extracted with respect to specific amounts of the attribute p , the utility is already scaled according to that external measure .\nHence , once the conditional utility functions u ( [ Ij ] ) are obtained , we can calculate u ( A ) according to ( 4 ) .\nThis last step may require ( in the worst case ) computation of a number of terms that is exponential in the number of max cliques .\nIn practice however we do not expect the intersection of the cliques to go that deep ; intersection of more than just a few max cliques would normally be empty .\nTo take advantage of that we can use the search algorithm suggested by Braziunas and Boutilier [ 6 ] , which efficiently finds all the nonempty intersections for each clique .\n4.2 Optimization\nAs shown , the wtp function can be used directly for pairwise comparisons of priced outcomes .\nAnother preference query often treated in the literature is optimization , or choice of best outcome , possibly under constraints .\nTypical decisions about exchange of a good or service exhibit what we call first-order preferential independence ( FOPI ) , under which most or all single attributes have a natural ordering of quality , independent of the values of the rest .3 For example , when choosing a PC we always prefer more memory , faster CPU , longer warranty , and so on .\nUnder FOPI , the unconstrained optimization of 3This should not be mistaken with the highly demanding condition of mutual preferential independence , that requires all tradeoffs between attributes to be independent .\nunpriced outcomes is trivial , hence we consider choice among attribute points with prices .\nSince any outcome can be best given enough monetary compensation , this problem is not well-defined unless the combinations are constrained somehow .\nA particularly interesting optimization problem arises in the context of negotiation , where we consider the utility of both buyers and sellers .\nThe multiattribute matching problem ( MMP ) [ 12 ] is concerned with finding an attribute point that maximizes the surplus of a trade , or the difference between the utilities of the buyer and the seller , ub ( A ) \u2212 us ( A ) .\nGAI , as an additive decomposition , has the property that if ub and us are in GAI form then ub ( A ) \u2212 us ( A ) is in GAI form as well .\nWe can therefore use combinatorial optimization procedures 
for GAI decomposition , based on the well studied variable elimination schemes ( e.g. , [ 15 ] ) to find the best trading point .\nSimilarly , this optimization can be done to maximize surplus between a trader 's utility function and a pricing system that assigns a price to each level of each GAI element , and this way guide traders to their optimal bidding points .\nIn the rest of the paper we develop a multiattribute procurement auction that builds on this idea .\n5 .\nGAI IN MULTIATTRIBUTE AUCTIONS\n5.1 The Multiattribute Procurement Problem\nIn the procurement setting a single buyer wishes to procure a single good , in some configuration 0 \u2208 \u0398 from one of the candidate sellers s1 , ... , sn .\nThe buyer has some private valuation function ( wtp ) ub : \u0398 \u2192 R , and similarly each seller si has a private valuation function ( willingness-to-accept ) .\nFor compliance with the procurement literature we refer to seller si 's valuation as a cost function , denoted by ci .\nThe multiattribute allocation problem ( MAP ) [ 19 ] is the welfare optimization problem in procurement over a discrete domain , and it is defined as : i \u2217 , 0 \u2217 = arg max ( ub ( 0 ) \u2212 ci ( 0 ) ) .\n( 5 ) i , \u03b8 To illustrate the need for a GAI price space we consider the case of traders with non-additive preferences bidding in an additive price space such as in PK 's auction AD .\nIf the buyer 's preferences are not additive , choosing preferred levels per attribute ( as in auction AD ) admits undesired combinations and fails to guide the sellers to the efficient configurations .\nNon-additive sellers face an exposure problem , somewhat analogous to traders with complementary preferences that participate in simultaneous auctions .\nA value a1 for attribute a may be optimal given that the value of another attribute b is b1 , and arbitrarily suboptimal given other values of b. Therefore bidding a1 and b1 may result in a poor allocation if the seller is '' outbid '' on b1 but left '' holding '' a1 .4 Instead of assuming full additivity , the auction designer can come up with a GAI preference structure that captures the set of common interdependencies between attributes .\nIf traders could bid on clusters of interdependent attributes , it would solve the problems discussed above .\nFor example , if a and b are interdependent ( meaning CDI ( a , b ) does not hold ) , we should be able to bid on the cluster ab .\nIf b in turn depends on c , we need another cluster bc .\nThis is still better than a general pricing structure that solicits bids for the cluster abc .\nWe stress that each trader may have a different set of interdependencies , and therefore to be completely general the 4If only the sellers are non-additive , the auction design could potentially alleviate this problem by collecting a new set of bids each round and `` forgetting '' bids from previous rounds , and also guiding non-additive sellers to bid on only one level per attribute in order to avoid undesired combinations .\nFigure 1 : ( i ) CDI map for { a , b , c } , reflecting the single condition CDI ( a , c ) .\n( ii ) The corresponding GAI network .\nGAI structure needs to account for all .5 However , in practice many domains have natural dependencies that are mutual to traders .\n5.2 GAI Trees\nAssume that preferences of all traders are reflected in a GAI structure I1 , ... 
, Ig .\nWe call each Ir a GAI element , and any assignment to Ir a sub-configuration .\nWe use \u03b8r to denote the sub-configuration formed by projecting configuration \u03b8 to element Ir .\nDEFINITION 9 .\nLet \u03b1 be an assignment to Ir and \u03b2 an assignment to Is .\nThe sub-configurations \u03b1 and \u03b2 are consistent iff for any attribute aj \u2208 Ir \u2229 Is , \u03b1 and \u03b2 agree on the value of aj .\nA collection \u03bd of sub-configurations is consistent if all pairs \u03b1 , \u03b2 \u2208 \u03bd are consistent .\nThe collection is called a cover if it contains exactly one sub-configuration \u03b1r corresponding to each element Ir .\nNote that a consistent cover { \u03b11 , ... , \u03b1g } represents a full configuration , which we denote by ( \u03b11 , ... , \u03b1g ) .\nA GAI network is a graph G whose nodes correspond to the GAI elements I1 , ... , Ig , with an edge between Ir and Is iff Ir \u2229 Is \u2260 \u2205 .\nEquivalently , a GAI network is the clique graph of a CDI-map .\nIn order to justify the compact pricing structure we require that for any set of optimal configurations ( wrt a given utility function ) , with a corresponding collection of sub-configurations \u03b3 , all consistent covers in \u03b3 must be optimal configurations as well .\nTo ensure this ( see Lemmas 8 and 10 ) , we assume a GAI decomposition in the form of a tree or a forest ( the GAI tree ) .\nA tree structure can be achieved for any set of CDI conditions by triangulation of the CDI-map prior to construction of the clique graph ( GAI networks and GAI trees are defined by Gonzales and Perny [ 14 ] , who also provide a triangulation algorithm ) .\nUnder GAI , the buyer 's value function ub and sellers ' cost functions ci can be decomposed as in ( 1 ) .\nWe use fb , r and fi , r to denote the local functions of buyer and sellers ( respectively ) , according to ( 4 ) .\nFor example , consider the procurement of a good with three attributes , a , b , c. Each attribute 's domain has two values ( e.g.
, { a1 , a2 } is the domain of a ) .\nLet the GAI structure be I1 = { a , b } , I2 = { b , c } .\nFigure 1 shows the simple CDI map and the corresponding GAI network , which is a GAI tree .\nHere , sub-configurations are assignments of the form a1b1 , a1b2 , b1c1 , and so on .\nThe set of sub-configurations { a1b1 , b1c1 } is a consistent cover , corresponding to the configuration a1b1c1 .\nIn contrast , the set { a1b1 , b2c1 } is inconsistent .\n5.3 The GAI Auction\nWe define an iterative multiattribute auction that maintains a GAI pricing structure : that is , a price pt ( \u00b7 ) corresponding to each sub-configuration of each GAI-tree element .\nThe price of a configuration \u03b8 at time t is defined as\nBidders submit sub-bids on sub-configurations and on an additional global discount term \u0394 .6 Sub-bids are always submitted for current prices , and need to be resubmitted at each round , therefore they do not need to explicitly carry the price .\nThe set of full bids of a seller contains all consistent covers that can be generated from that seller 's current set of sub-bids .\nThe existence of a full bid over a configuration \u03b8 represents the seller 's willingness to accept the price pt ( \u03b8 ) for supplying \u03b8 .\nAt the start of the auction , the buyer reports ( to the auction , not to sellers ) her complete valuation in GAI form .\nThe initial prices of sub-configurations are set at some level above the buyer 's valuations , that is , p1 ( \u03b8r ) > fb , r ( \u03b8r ) for all \u03b8r .\nThe discount \u0394 is initialized to zero .\nThe auction has the dynamics of a descending clock auction : at each round t , bids are collected for current prices and then prices are reduced according to price rules .\nA seller is considered active in a round if she submits at least one full bid .\nIn round t > 1 , only sellers who were active in round t \u2212 1 are allowed to participate , and the auction terminates when no more than a single seller is active .\nWe denote the set of sub-bids submitted by si by Bti , and the corresponding set of full bids is\nIn our example , a seller could submit sub-bids on a set of sub-configurations such as a1b1 and b1c1 , and that combines to a full bid on a1b1c1 .\nThe auction proceeds in two phases .\nIn the first phase ( A ) , at each round t the auction computes a set of preferred sub-configurations Mt. Section 5.4 shows how to define Mt to ensure convergence , and Section 5.5 shows how to efficiently compute it .\nIn phase A , the auction adjusts prices after each round , reducing the price of every sub-configuration that has received a bid but is not in the preferred set .\nLet \u03b5 be the prespecified price increment parameter .\nSpecifically , the phase A price change rule is applied to all \u03b8r \u2208 \u222a ni = 1 Bti \\ Mt : The RHS maximum ensures that prices do not get reduced below the buyer 's valuation in phase A.
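The display equations defining the configuration price and the phase A price-change rule were lost in extraction. The following is a minimal Python sketch of the bookkeeping they describe, under two assumptions consistent with the surrounding text and the worked example of Section 5.6: a configuration's price is the sum of the prices of its sub-configurations minus the global discount \u0394, and the phase A reduction applied to a sub-configuration is \u03b5 divided by the number of GAI elements in its tree (with \u03b5 = 8 and two elements this gives the per-sub-configuration reduction of 4 used in the example), floored at the buyer's local valuation fb,r. All function and variable names are ours, not the paper's.

# A sub-configuration is a tuple of (attribute, value) pairs, e.g. (("a", "a1"), ("b", "b2")).

def consistent(cover):
    # Definition 9: all sub-configurations must agree on every shared attribute.
    assigned = {}
    for sub in cover:
        for attr, val in sub:
            if assigned.setdefault(attr, val) != val:
                return False
    return True

def config_price(prices, cover, delta=0.0):
    # Assumed form of the configuration price: sum of the sub-configuration
    # prices in a consistent cover, minus the global discount term.
    return sum(prices[sub] for sub in cover) - delta

def phase_a_price_change(prices, bid_on, preferred, buyer_value, eps, tree_elems):
    # Reduce the price of every sub-configuration that received a sub-bid but is
    # not in the preferred set M^t; the max() keeps every price at or above the
    # buyer's local valuation f_{b,r}, as the text requires.
    updated = dict(prices)
    for sub in bid_on - preferred:
        updated[sub] = max(prices[sub] - eps / tree_elems[sub], buyer_value[sub])
    return updated

# One consistent cover at the initial prices of the Section 5.6 example.
prices = {(("a", "a1"), ("b", "b2")): 75, (("b", "b2"), ("c", "c1")): 90}
cover = [(("a", "a1"), ("b", "b2")), (("b", "b2"), ("c", "c1"))]
assert consistent(cover) and config_price(prices, cover) == 165

The final assertion checks one consistent cover at the example's initial prices (75 over I1, 90 over I2), recovering the common starting price of 165 used in the walk-through below.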
Let Mt denote the set of configurations that are consistent covers in Mt :\nThe auction switches to phase B when all active sellers have at least one full bid in the buyer 's preferred set :\nLet T be the round at which [ SWITCH ] becomes true .\nAt this point , the auction selects the buyer-optimal full bid \u03b7i for each seller si .\nIn phase B , si may bid only on \u03b7i .\nThe prices of sub-configurations are fixed at pT ( \u00b7 ) during this phase .\nThe only adjustment in phase B is to \u0394 , which is increased in every round by .\nThe auction terminates when at most one seller ( if exactly one , designate it s\u02c6i ) is active .\nThere are four distinct cases :\n2 .\nAll active sellers drop out in the same round in phase B .\nThe auction selects the best seller ( s\u02c6i ) from the preceding round , and applies the applicable case below .\n3 .\nThe auction terminates in phase B with a final price above buyer 's valuation , pT ( \u03b7\u02c6i ) \u2212 \u0394 > ub ( \u03b7\u02c6i ) .\nThe auction offers the winner s\u02c6i an opportunity to supply \u03b7\u02c6i at price ub ( \u03b7\u02c6i ) .\n4 .\nThe auction terminates in phase B with a final price pT ( \u03b7\u02c6i ) \u2212 \u0394 \u2264 ub ( \u03b7\u02c6i ) .\nThis is the ideal situation , where the auction allocates the chosen configuration and seller at this resulting price .\nThe overall auction is described by high-level pseudocode in Algorithm 1 .\nAs explained in Section 5.4 , the role of phase A is to guide the traders to their efficient configurations .\nPhase B is a one-dimensional competition over the surplus that remaining seller candidates can provide to the buyer .\nIn Section 5.5 we discuss the computational tasks associated with the auction , and Section 5.6 provides a detailed example .\nimplement allocation and payment to winning seller\n5.4 Economic Analysis\nWhen the optimal solution to MAP ( 5 ) provides negative welfare and sellers do not bid below their cost , the auction terminates in phase A , no trade occurs and the auction is trivially efficient .\nWe therefore assume throughout the analysis that the optimal ( seller , configuration ) pair provides non-negative welfare .\nThe buyer profit from a configuration \u03b8 is defined as7\nand similarly \u03c0i ( \u03b8 ) = p ( \u03b8 ) \u2212 ci ( \u03b8 ) is the profit of si .\nIn addition , for \u03bc \u2286 { 1 , ... 
, g } we denote the corresponding set of subconfigurations by \u03b8\u03bc , and define the profit from a configuration \u03b8 over the subset \u03bc as\nThe function \u03c3i : \u0398 \u2192 R represents the welfare , or surplus function ub ( \u00b7 ) \u2212 ci ( \u00b7 ) .\nFor any price system p ,\n7We drop the t superscript in generic statements involving price and profit functions , understanding that all usage is with respect to the ( currently ) applicable prices .\nSince we do not assume anything about the buyer 's strategy , the analysis refers to profit and surplus with respect to the face value of the buyer 's report .\nThe functions \u03c0i and \u03c3i refer to the true cost functions of si .\nIntuitively , an SB seller follows a myopic best response strategy ( MBR ) , meaning they bid myopically rather than strategically by optimizing their profit with respect to current prices .\nTo calculate Bti sellers need to optimize their current profit function , as discussed in Section 4.2 .\nThe following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders .\nLEMMA 8 .\nLet \u03a8 be a set of configurations , all maximizing profit for a trader \u03c4 ( seller or buyer ) at the relevant prices .\nLet 4 ) = { \u03b8r | \u03b8 \u2208 \u03a8 , r \u2208 { 1 , ... , g } .\nThen any consistent cover in 4 ) is also a profit-maximizing configuration for \u03c4 .\nProof sketch ( full proof in the online appendix ) : A source of an element \u03b8r is a configuration \u03b8\u02dc \u2208 \u03a8 from which it originated ( mean\u02dc\u03b8r = \u03b8r ) .\nStarting from the supposedly suboptimal cover \u03b81 , we build a series of covers \u03b81 , ... , \u03b8L .\nAt each \u03b8j we flip the value of a set of sub-configurations \u03bcj corresponding to a subtree , with the sub-configurations of the configuration \u02c6\u03b8j \u2208 \u03a8 which is the source of the parent \u03b3j of \u03bcj .\nThat ensures that all elements in \u03bcj \u222a { \u03b3j } have a mutual source \u02c6\u03b8j .\nWe show that all \u03b8j are consistent and that they must all be suboptimal as well , and since all elements of \u03b8L have a mutual source , meaning \u03b8L = \u02c6\u03b8L \u2208 \u03a8 , it contradicts optimality of \u03a8 .\nNext we consider combinations of configurations that are only within some \u03b4 of optimality .\nLEMMA 10 .\nLet \u03a8 be a set of configurations , all are within \u03b4 of maximizing profit for a trader \u03c4 at the prices , and 4 ) defined as in Lemma 8 .\nThen any consistent cover in 4 ) is within \u03b4g of maximizing utility for \u03c4 .\nThis bound is tight , that is for any GAI tree and a non-trivial domain we can construct a set \u03a8 as above in which there exists a consistent cover whose utility is exactly \u03b4g below the maximal .\nNext we formally define Mt. For connected GAI trees , Mt is the set of sub-configurations that are part of a configuration within of optimal .\nWhen the GAI tree is in fact a forest , we apportion the error proportionally across the disconnected trees .\nLet G be comprised of trees G1 , ... , Gh .\nWe use \u03b8j to denote the projection of a configuration \u03b8 on the tree Gj , and gj denotes the number of GAI elements in Gj .\nLet ej = gj \u2212 1 denote the number of edges in Gj .\nWe define the connectivity parameter , e = maxj = 1 , ... 
, h ej .\nAs shown below , this connectivity parameter is an important factor in the performance of the auction .\nIn the fully additive case this loss of efficiency reduces to .\nOn the other extreme , if the GAI network is connected then e + 1 = g .\nWe also note that without assuming any preference structure , meaning that the CDI map is fully connected , g = 1 and the efficiency loss is again .\nLemmas 12 through 15 show that through the price system , the choice of buyer preferred configurations , and price change rules , Phase A leads the buyer and each of the sellers to their mutually efficient configuration .\nLEMMA 12 .\nmax\u03b8E\u0398 \u03c0tb ( \u03b8 ) does not change in any round t of phase A. PROOF .\nWe prove the lemma per each tree Gj .\nThe optimal values for disconnected components are independent of each other hence if the maximal profit for each component does not change the combined maximal profit does not change as well .\nIf the price of \u03b8 ' j was reduced during phase A , that is pt +1 ( \u03b8 ` j ) = pt ( \u03b8 ` j ) \u2212 \u03b4 , it must be the case that some w \u2264 gj sub-configurations of \u03b8 ' j are not in Mtj , and \u03b4 = w ~\nThis is true for any configuration whose profit improves , therefore the maximal buyer profit does not change during phase A. LEMMA 13 .\nThe price of at least one sub-configuration must be reduced at every round in phase A. PROOF .\nIn each round t < T of phase A there exists an active seller i for whom Bti \u2229 Mt = \u2205 .\nHowever to be active in round t , Bti = ~ \u2205 .\nLet \u03b8\u02c6 \u2208 Bti .\nIf \u2200 r.\u02c6\u03b8r \u2208 Mt , then \u03b8\u02c6 \u2208 Mt by definition of Mt. Therefore there must be \u02c6\u03b8r \u2208 ~ Mt. We need to prove that for at least one of these sub-configurations , \u03c0tb ( \u02c6\u03b8r ) < 0 to ensure activation of rule [ A ] .\n\u02c6\u03b8r \u2208 \u00af Mt , \u03c0tb ( \u02c6\u03b8r ) \u2265 0 .\nAssume for contradiction that for any For simplicity we assume that for any \u03b8r , \u03c01b ( \u03b8r ) is some product of ~ g ( that can be easily done ) , and that ensures that \u03c0tb ( \u02c6\u03b8r ) = 0 because once profit hits 0 it can not increase by rule [ A ] .\nIf \u02c6\u03b8r \u2208 \u00af Mt , \u2200 r = 1 , ... , g then \u03c0tb ( \u02c6\u03b8 ) = 0 .\nThis contradicts Lemma 12 since we set high initial prices .\nTherefore some of the sub-configurations of \u03b8\u02c6 are in Mt , and WLOG we assume it is \u02c6\u03b81 , ... , \u02c6\u03b8k .\nTo be in Mt these k sub-configurations must have been in some preferred full configuration , meaning there exists \u03b8 ' \u2208 Mt such that\nSince \u03b8\u02c6 \u2208 / Mt It must be that case that \u03c0tb ( \u02c6\u03b8 ) < \u03c0tb ( \u03b8 ' ) .\nTherefore\nHence for at least one r \u2208 { k + 1 , ... , g } , \u03c0tb ( \u03b8 ` r ) > 0 contradicting rule [ A ] .\nMt j = { \u03b8r | \u03c0tb ( \u03b8j ) \u2265 max \u03b8 ~ j E\u0398j Then define Mt = Shj = 1 Mtj .\nLEMMA 14 .\nWhen the solution to MAP provides positive surplus , and at least the best seller is SB , the auction must reach phase B. PROOF .\nBy Lemma 13 prices must go down in every round of phase A. 
Rule [ A ] sets a lower bound on all prices therefore the auction either terminates in phase A or must reach condition [ SWITCH ] .\nWe set the initial prices are high such that max\u03b8E\u0398 \u03c01b ( \u03b8 ) < 0 , and by Lemma 12 max\u03b8E\u0398 \u03c0tb ( \u03b8 ) < 0 during phase A .\nWe assume that the efficient allocation ( \u03b8 * , i * ) provides positive welfare , that is \u03c3i * ( \u03b8 * ) = \u03c0tb ( \u03b8 * ) + \u03c0t i * ( \u03b8 * ) > 0 .\nsi * is SB therefore she will leave the auction only when \u03c0t i * ( \u03b8 * ) < 0 .\nThis can happen only when \u03c0tb ( \u03b8 * ) > 0 , therefore si * does not drop in phase A hence the auction can not terminate before reaching condition [ SWITCH ] .\nLEMMA 15 .\nFor SB seller si , \u03b7i is ( e + 1 ) - efficient .\nPROOF .\n\u03b7i is chosen to maximize the buyer 's surplus out of Bti at the end of phase A .\nSince Bti \u2229 Mt = ~ \u2205 , clearly \u03b7i \u2208 Mt. From Corollary 11 and Corollary 9 , for any \u02dc\u03b8 ,\nThis establishes the approximate bilateral efficiency of the results of Phase A ( at this point under the assumption of SB ) .\nBased on Phase B 's simple role as a single-dimensional bidding competition over the discount , we next assert that the overall result is efficient under SB , which in turn proves to be an approximately ex-post equilibrium strategy in the two phases .\nLEMMA 16 .\nIf sellers si and sj are SB , and si is active at least as long as sj is active in phase B , then\nFollowing PK , we rely on an equivalence to the one-sided VCG auction to establish incentive properties for the sellers .\nIn the onesided multiattribute VCG auction , buyer and sellers report valuation and cost functions \u02c6ub , \u02c6ci , and the buyer pays the sell-side VCG payment to the winning seller .\nDEFINITION 11 .\nLet ( \u03b8 * , i * ) be the optimal solution to MAP .\nLet ( \u02dc\u03b8 , \u02dci ) be the best solution to MAP when i * does not participate .\nThe sell-side VCG payment is\nIt is well-known that truthful bidding is a dominant strategy for sellers in the one-sided VCG auction .\nIt is also shown by PK that the maximal regret for buyers from bidding truthfully in this mechanism is ub ( \u03b8 * ) \u2212 ci * ( \u03b8 * ) \u2212 ( ub ( \u02dc\u03b8 ) \u2212 \u02c6c\u02dci ( \u02dc\u03b8 ) ) , that is , the marginal product of the efficient seller .\nUsually in iterative auctions the VCG outcome is only nearly achieved , but the deviation is bounded by the minimal price change .\nWe show a similar result , and therefore define \u03b4-VCG payments .\nWhen payment is guaranteed to be \u03b4-VCG sellers can only affect their payment within that range , therefore their gain by falsely reporting their cost is bounded by 2\u03b4 .\nIn practice , however , sellers are unlikely to have the information that would let them exploit that potential gain .\nThey are much more likely to lose from bidding on their less attractive configurations .\n5.5 Computation and Complexity\nThe size of the price space maintained in the auction is equal to the total number of sub-configurations , meaning it is exponential in maxr | Ir | .\nThis is also equivalent to the tree-width ( plus one ) of the original CDI-map .\nFor the purpose of the computational analysis\nQ\nlet dj denote the domain of attribute aj , and I = Sg jEIr dj , r = 1 the collection of all sub-configurations .\nThe first purpose of this sub-section is to show that the complexity of all the computations required for the auction depends only on | 
I | , i.e. , no computation depends on the size of the full exponential domain .\nWe are first concerned with the computation of Mt. Since Mt grows monotonically with t , a naive application of optimization algorithm to generate the best outcomes sequentially might end up enumerating significant portions of the fully exponential domain .\nHowever as shown below this plain enumeration can be avoided .\nPROPOSITION 20 .\nThe computation of Mt can be done in time O ( | I | 2 ) .\nMoreover , the total time spent on this task throughout the auction is O ( | I | ( | I | + T ) ) .\nThe bounds are in practice significantly lower , based on results on similar problems from the probabilistic reasoning literature [ 18 ] .\nOne of the benefits of the compact pricing structure is the compact representation it lends for bids : sellers submit only sub-bids , and therefore the number of them submitted and stored per seller is bounded by | I | .\nSince the computation tasks : Bti = ~ \u2205 , rule [ SWITCH ] and choice of \u03b7i are all involving the set Bit , it is important to note that their performance only depend on the size of the set Bit , since they are all subsumed by the combinatorial optimization task over Bti or Bti \u2229 Mt. Next , we analyze the number of rounds it takes for the auction to terminate .\nPhase B requires maxi = 1 , ... n \u03c0Ti ( \u03b7i ) ~ 1 .\nSince this is equivalent to price-only auctions , the concern is only with the time complexity of phase A .\nSince prices can not go below fb , r ( \u03b8r ) , an upper bound on the number of rounds required is\nHowever phase A may converge faster .\nLet the initial negative profit chosen by the auctioneer be m = max\u03b8E\u0398 \u03c01b ( \u03b8 ) .\nIn the worst case phase A needs to run until \u2200 \u03b8 \u2208 \u0398.\u03c0b ( \u03b8 ) = m .\nThis happens for example when \u2200 \u03b8r \u2208 I.pt ( \u03b8r ) = fb , r ( \u03b8r ) + mg .\nIn general , the closer the initial prices reflect buyer valuation , the faster phase A converges .\nOne extreme is to choose p1 ( \u03b8r ) =\nTable 1 : GAI utility functions for the example domain .\nfb rep\nresents the buyer 's valuation , and f1 and f2 costs of the sellers s1 and s2 .\nfb , r ( \u03b8r ) + mg .\nThat would make phase A redundant , at the cost of full initial revelation of buyer 's valuation as done in other mechanisms discussed below .\nBetween this option and the other extreme , which is ` d\u03b1 , \u03b1\u02c6 G I , p1 ( \u03b1 ) = p1 ( \u02c6\u03b1 ) the auctioneer has a range of choices to determine the right tradeoff between convergence time and information revelation .\nIn the example below the choice of a lower initial price for the domain of I1 provides some speedup by revealing a harmless amount of information .\nAnother potential concern is the communication cost associated with the Japanese auction style .\nThe sellers need to send their bids over and over again at each round .\nA simple change can be made to avoid much of the redundant communication : the auction can retain sub-bids from previous rounds on sub-configurations whose price did not change .\nSince combinations of sub-bids from different rounds can yield sub-optimal configurations , each sub-bid should be tagged with the number of the latest round in which it was submitted , and only consistent combinations from the same round are considered to be full bids .\nWith this implementation sellers need not resubmit their bid until a price of at least one sub-configuration has changed .\n5.6 
Example\nWe use the example settings introduced in Section 5.2 .\nRecall that the GAI structure is I1 = { a , b } , I2 = { b , c } ( note that e = 1 ) .\nTable 1 shows the GAI utilities for the buyer and the two sellers s1 , s2 .\nThe efficient allocation is ( s1 , a1b2c1 ) with a surplus of 45 .\nThe maximal surplus of the second best seller , s2 , is 25 , achieved by a1b1c1 , a2b1c1 , and a2b2c2 .\nWe set all initial prices over I1 to 75 , and all initial prices over I2 to 90 .\nWe set the price increment parameter \u03b5 = 8 , meaning that the price reduction for sub-configurations is 4 .\nThough with these numbers it is not guaranteed by Theorem 17 , we expect s1 to win on either the efficient allocation or on a1b2c2 which provides a surplus of 39 .\nThe reason is that these are the only two configurations which are within ( e + 1 ) \u03b5 = 16 of being efficient for s1 ( therefore one of them must be chosen by Phase A ) , and both provide more than \u03b5 surplus over s2 's most efficient configuration ( and this is sufficient in order to win in Phase B ) .\nTable 2 shows the progress of phase A .\nTable 2 : Auction progression in phase A. Sell bids and designation of Mt ( using * ) are shown below the price of each sub-configuration .\nInitially all configurations have the same price ( 165 ) , so sellers bid on their lowest cost configuration which is a2b1c1 for both ( with profit 80 to s1 and 90 to s2 ) , and that translates to sub-bids on a2b1 and b1c1 .\nM1 contains the sub-configurations a2b2 and b2c1 of the highest value configuration a2b2c1 .\nPrice is therefore decreased on a2b1 and b1c1 .\nAfter the price change , s1 has higher profit ( 74 ) on a1b2c2 and she therefore bids on a1b2 and b2c2 .\nNow ( round 2 ) their prices go down , reducing the profit on a1b2c2 to 66 and therefore in round 3 s1 prefers a2b1c2 ( profit 67 ) .\nAfter the next price change the configurations a1b2c1 and a1b2c2 both become optimal ( profit 66 ) , and the sub-bids a1b2 , b2c1 and b2c2 capture the two .\nThese configurations stay optimal for another round ( 5 ) , with profit 62 .\nAt this point s1 has a full bid ( in fact two full bids : a1b2c2 and a1b2c1 ) in M5 , and therefore she no longer changes her bids since the price of her optimal configurations does not decrease .
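As described in Section 5.4, a sincere seller follows a myopic best response: at each round she enumerates the consistent covers of the GAI tree, computes the profit of each at the current prices, and submits sub-bids for the sub-configurations of her profit-maximizing configurations. The sketch below illustrates that computation for the example structure I1 = { a , b } , I2 = { b , c } ; the cost entries of Table 1 did not survive extraction, so any numbers supplied to local_cost would be placeholders, and all names are ours rather than the paper's.

from itertools import product

ELEMENTS = [("a", "b"), ("b", "c")]   # I1 and I2 of the example GAI tree
DOMAINS = {"a": ["a1", "a2"], "b": ["b1", "b2"], "c": ["c1", "c2"]}

def covers():
    # Enumerate every full configuration as a consistent cover of the GAI tree.
    attrs = sorted(DOMAINS)
    for values in product(*(DOMAINS[a] for a in attrs)):
        full = dict(zip(attrs, values))
        yield tuple(tuple((a, full[a]) for a in element) for element in ELEMENTS)

def myopic_best_response(prices, local_cost, delta=0.0):
    # Return the seller's profit-maximizing covers at the current prices and the
    # sub-bids they induce; profit = configuration price minus configuration cost.
    best, best_profit = [], None
    for cover in covers():
        profit = sum(prices[sub] - local_cost[sub] for sub in cover) - delta
        if best_profit is None or profit > best_profit:
            best, best_profit = [cover], profit
        elif profit == best_profit:
            best.append(cover)
    sub_bids = {sub for cover in best for sub in cover}
    return best, sub_bids, best_profit

This is exactly what both sellers do in round 1 of the walk-through: with every configuration priced at 165 they each select their lowest-cost configuration a2b1c1, which translates into sub-bids on a2b1 and b1c1.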
s2 sticks to a2b1c1 during the first four rounds , switching to a1b1c1 in round 5 .\nIt takes four more rounds for s2 and Mt to converge ( M10 \u2229 B10 2 = { a1b1c1 } ) .\nAfter round 9 the auction sets \u03b71 = a1b2c1 ( which yields more buyer profit than a1b2c2 ) and \u03b72 = a1b1c1 .\nFor the next round ( 10 ) \u0394 = 8 , increased by 8 for each subsequent round .\nNote that p9 ( a1b1c1 ) = 133 , and c2 ( a1b1c1 ) = 90 , therefore \u03c0T2 ( \u03b72 ) = 43 .\nIn round 15 , \u0394 = 48 meaning p15 ( a1b1c1 ) = 133 \u2212 48 = 85 and that causes s2 to drop out , setting the final allocation to ( s1 , a1b2c1 ) and p15 ( a1b2c1 ) = 157 \u2212 48 = 109 .\nThat leaves the buyer with a profit of 31 and s1 with a profit of 14 , less than \u03b5 below the VCG profit of 20 .\nThe welfare achieved in this case is optimal .\nTo illustrate how some efficiency loss could occur consider the case that c1 ( b2c2 ) = 60 .\nIn that case , in round 3 the configuration a1b2c2 provides the same profit ( 67 ) as a2b1c2 , and s1 bids on both .\nWhile a2b1c2 is no longer optimal after the price change , a1b2c2 remains optimal on subsequent rounds because b2c2 \u2208 Mt , and the price change of a1b2 affects both a1b2c2 and the efficient configuration a1b2c1 .\nWhen phase A ends B10 1 \u2229 M10 = { a1b2c2 } so the auction terminates with the slightly suboptimal configuration and surplus 40 ."} {"id": "C-31", "title": "", "abstract": "", "keyphrases": ["peer-to-peer", "file share system", "intranet", "author", "document", "apocrita", "jxta", "distribut index", "peer-to-peer distribut model", "idl queri", "index file", "incom file", "p2p search", "p2p", "file share"], "prmu": [], "lvl-1": "Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets Joshua J. Reynolds, Robbie McLeod, Qusay H.
Mahmoud Distributed Computing and Wireless & Telecommunications Technology University of Guelph-Humber Toronto, ON, M9W 5L7 Canada {jreyno04,rmcleo01,qmahmoud}@uoguelph.\nca ABSTRACT Many organizations are required to author documents for various purposes, and such documents may need to be accessible by all member of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are shared between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user``s machine.\nAnother problem arises when a document is made available on a user``s machine and that user is offline, in which case the document is no longer accessible.\nIn this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications.\nGeneral Terms Design, Experimentation, Performance.\n1.\nINTRODUCTION The Peer-to-Peer (P2P) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet.\nWith the increasingly common place broadband Internet access, P2P technology has finally become a viable way to share documents and media files.\nThere are already programs on the market that enable P2P file sharing.\nThese programs enable millions of users to share files among themselves.\nWhile the utilization of P2P clients is already a gigantic step forward compared to downloading files off websites, using such programs are not without their problems.\nThe downloaded files still require a lot of manual management by the user.\nThe user still needs to put the files in the proper directory, manage files with multiple versions, delete the files when they are no longer wanted.\nWe strive to make the process of sharing documents within an Intranet easier.\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are sent between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user``s machine.\nFurthermore, some organizations do not have a file sharing server or the necessary network infrastructure to enable one.\nIn this paper we present Apocrita, which is a cost-effective distributed P2P file sharing system for such organizations.\nThe rest of this paper is organized as follows.\nIn section 2, we present Apocrita.\nThe distributed indexing mechanism and protocol are presented in Section 3.\nSection 4 presents the peer-topeer distribution model.\nA proof of concept prototype is presented in Section 5, and performance evaluations are discussed in 
Section 6.\nRelated work is presented is Section 7, and finally conclusions and future work are discussed in Section 8.\n2.\nAPOCRITA Apocrita is a distributed peer-to-peer file sharing system, and has been designed to make finding documents easier in an Intranet environment.\nCurrently, it is possible for documents to be located on a user's machine or on a remote machine.\nIt is even possible that different revisions could reside on each node on the Intranet.\nThis means there must be a manual process to maintain document versions.\nApocrita solves this problem using two approaches.\nFirst, due to the inherent nature of Apocrita, the document will only reside on a single logical location.\nSecond, Apocrita provides a method of reverting to previous document versions.\nApocrita Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.\nTo copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.\nACMSE``07, MARCH 23-24, 2007, WINSTON-SALEM, NC, USA.\nCOPYRIGHT 2007 ACM 978-1-59593-629-5/07/0003 ...$5.00.\n174 will also distribute documents across multiple machines to ensure high availability of important documents.\nFor example, if a machine contains an important document and the machine is currently inaccessible, the system is capable of maintaining availability of the document through this distribution mechanism.\nIt provides a simple interface for searching and accessing files that may exist either locally or remotely.\nThe distributed nature of the documents is transparent to the user.\nApocrita supports a decentralized network model where the peers use a discovery protocol to determine peers.\nApocrita is intended for network users on an Intranet.\nThe main focus is organizations that may not have a network large enough to require a file server and supporting infrastructure.\nIt eliminates the need for documents to be manually shared between users while being edited and reduces the possibility of conflicting versions being distributed.\nThe system also provides some redundancy and in the event of a single machine failure, no important documents will be lost.\nIt is operating system independent, and easy to access through a web browser or through a standalone application.\nTo decrease the time required for indexing a large number of documents, the indexing process is distributed across available idle nodes.\nLocal and remote files should be easily accessible through a virtual mountable file system, providing transparency for users.\n3.\nDISTRIBUTED INDEXING Apocrita uses a distributed index for all the documents that are available on the Intranet.\nEach node will contain part of the full index, and be aware of what part of the index each other node has.\nA node will be able to contact each node that contains a unique portion of the index.\nIn addition, each node has a separate local index of its own documents.\nBut as discussed later, in the current implementation, each node has a copy of the entire index.\nIndexing of the documents is distributed.\nTherefore, if a node is in the process of indexing many documents, it will break up the work over the nodes.\nOnce a node``s local index is updated with the new documents, the distributed index will then be updated.\nThe current distributed indexing 
system consists of three separate modules: NodeController, FileSender, and NodeIndexer.\nThe responsibility of each module is discussed later in this section.\n3.1 Indexing Protocol\nThe protocol we have designed for the distributed indexing is depicted in Figure 1.\nFigure 1.\nApocrita distributed indexing protocol.\nIDLE QUERY: The IDLE QUERY is sent out from the initiating node to determine which other nodes may be able to help with the overall indexing process.\nThere are no parameters sent with the command.\nThe receiving node will respond with either a BUSY or IDLE command.\nIf the IDLE command is received, the initiating node will add the responding node to a list of available distributed indexing helpers.\nIn the case of a BUSY command being received, the responding node is ignored.\nBUSY: Once a node receives an IDLE QUERY, it will determine whether it can be considered a candidate for distributed indexing.\nThis determination is based on the overall CPU usage of the node.\nIf the node is using most of its CPU for other processes, the node will respond to the IDLE QUERY with a BUSY command.\nIDLE: As with the case of the BUSY response, the node receiving the IDLE QUERY will determine its eligibility for distributed indexing.\nTo be considered a candidate for distributed indexing, the overall CPU usage must be at a minimum to allow for dedicated indexing of the distributed documents.\nIf this is the case, the node will respond with an IDLE command.\nINCOMING FILE: Once the initiating node assembles a set of idle nodes to assist with the distributed indexing, it will divide the documents to be sent to the nodes.\nTo do this, it sends an INCOMING FILE message, which contains the name of the file as well as the size in bytes.\nAfter the INCOMING FILE command has been sent, the initiating node will begin to stream the file to the other node.\nThe initiating node will loop through the files that are to be sent to the other node; each file stream being preceded by the INCOMING FILE command with the appropriate parameters.\nINDEX FILE: Once the indexing node has completed the indexing process of the set of files, it must send the resultant index back to the initiating node.\nThe index is comprised of multiple files, which exist on the file system of the indexing node.\nAs with the INCOMING FILE command, the indexing node streams each index file after sending an INDEX FILE command.\nThe INDEX FILE command has two parameters: the first being the name of the index, and the second is the size of the file in bytes.\nSEND COMPLETE: When sending the sets of files for both the index and the files to be indexed, the node must notify the corresponding node when the process is complete.\nOnce the initiating node is finished sending the set of documents to be indexed, it will then send a SEND COMPLETE command indicating to the indexing node that there are no more files and that the indexing node can proceed with indexing the files.\nIn the case of the indexing node sending the index files back, it will complete the transfer with the SEND COMPLETE command, indicating to the initiating node that there are no more index files to be sent and that the initiating node can then assemble those index files into the main index.\nThe NodeController is responsible for setting up connections with nodes in the idle state to distribute the indexing process.\nUsing JXTA [5], the node controller will obtain a set of nodes.\nThis set of nodes is iterated and each one is sent the IDLE QUERY command.\nThe nodes that respond with IDLE are then collected.
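A minimal sketch of the handshake and work division just described may help; it also anticipates the even document split and the 50% CPU threshold detailed in the next paragraphs. This is illustrative Python, not Apocrita's actual Java/JXTA implementation: the pipe object, the send_idle_query callback and the message tuples are assumptions of ours.

BUSY, IDLE = "BUSY", "IDLE"

def respond_to_idle_query(cpu_usage_percent, busy_threshold=50.0):
    # A receiving node answers an IDLE QUERY from its current CPU load; the 50%
    # threshold follows the NodeIndexer description below.
    return BUSY if cpu_usage_percent > busy_threshold else IDLE

def collect_idle_nodes(peers, send_idle_query):
    # send_idle_query(peer) performs the IDLE QUERY round trip and returns the
    # peer's reply; peers answering BUSY are simply ignored.
    return [peer for peer in peers if send_idle_query(peer) == IDLE]

def divide_documents(documents, indexing_nodes):
    # Evenly divide the documents to be indexed among the participating nodes
    # (e.g. 100 documents over 10 nodes gives 10 documents each).
    shares = {node: [] for node in indexing_nodes}
    for i, doc in enumerate(documents):
        shares[indexing_nodes[i % len(indexing_nodes)]].append(doc)
    return shares

def stream_files(pipe, named_files):
    # Announce each file with INCOMING FILE <name> <size>, stream its bytes, and
    # finish with SEND COMPLETE; the indexing node returns its index files the
    # same way, using INDEX FILE messages instead.
    for name, data in named_files:
        pipe.send(("INCOMING FILE", name, len(data)))
        pipe.send(data)
    pipe.send(("SEND COMPLETE",))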
The set of idle nodes includes the node initiating the distributed indexing process, referred to as the local node.\nOnce the collection of idle nodes is obtained, the node updates the set of controllers and evenly divides the set of documents that are to be indexed.\nFor example, if there are 100 documents and 10 nodes (including the local node) then each node will have 10 documents to index.\nFor each indexing node an instance of the FileSender object is created.\nThe FileSender is aware of the set of documents that node is responsible for.\nOnce a FileSender object has been created for each node, the NodeController waits for each FileSender to complete.\nWhen the FileSender objects have completed, the NodeController will take the resultant indexes from each node and pass them to an instance of the IndexCompiler, which maintains the index and the list of FileSenders.\nOnce the IndexCompiler has completed it will return to the idle state and activate the directory scanner to monitor the locally owned set of documents for changes that may require reindexing.\nThe NodeIndexer is responsible for receiving documents sent to it by the initiating node and then indexing them using the Lucene engine [7].\nOnce the indexing is complete the resulting index is streamed back to the initiating node as well as compiled into the indexer node's own local index.\nBefore initiating the indexing process it must be sent an IDLE QUERY message.\nThis is the first command that sets off the indexing process.\nThe indexer node will determine whether it is considered idle based on the current CPU usage.\nAs outlined in the protocol section, if the node is not being used and has a low overall CPU usage percentage it will return IDLE to the IDLE QUERY command.\nIf the indexer node's CPU usage is above 50% for a specified amount of time it is then considered to be busy and will respond to the IDLE QUERY command with BUSY.\nIf a node is determined busy it returns to its listening state waiting for another IDLE QUERY from another initiating node.\nIf the node is determined to be idle it will enter the state where it will receive files from the initiating node that it is responsible for indexing.\nOnce all of the files are received by the indexing node, indicated by a SEND COMPLETE message, it starts an instance of the Lucene indexing engine.\nThe files are stored in a temporary directory separate from the node's local documents that it is responsible for maintaining an index of.\nThe Lucene index writer then indexes all of the transferred files.\nThe index is stored on the drive within a temporary directory separate from the current index.\nAfter the indexing of the files completes, the indexer node enters the state where the index files are sent back to the initiating node.\nThe indexer node loops through all of the files created by Lucene's IndexWriter and streams them to the initiating node.\nOnce these files are sent back, that index is then merged into the indexer node's own full index of the existing files.\nIt then enters the idle state, where it listens for any other nodes that require help with distributed indexing.\nThe FileSender object is the initiating node's counterpart of the NodeIndexer.\nIt initiates the communication between the initiating node and the node that will assist in the distributed indexing.\nThe initiating node runs many instances of the FileSender, one for each other node it has determined to be idle.\nUpon instantiation of the FileSender it is passed
the node that it is responsible for contacting and the set of files that must be sent.\nThe FileSender``s first job is to send the files that are to be indexed by the other idle node.\nThe files are streamed one at a time to the other node.\nIt sends each file using the INCOMING FILE command.\nWith that command it sends the name of the file being sent and the size in bytes.\nOnce all files have been sent the FileSender sends the SEND COMPLETE command.\nThe FileSender creates an instance of Lucene``s IndexWriter and prepares to create the index in a temporary directory on the file system.\nThe FileSender will begin to receive the files that are to be saved within the index.\nIt receives an INDEX FILE command with the name of the files and the size in bytes.\nThis file is then streamed into the temporary index directory on the FileSender node.\nAfter the transfer of the index files has been completed the FileSender notifies the instance of the index compiler that it is ready to combine the index.\nEach instance of the FileSender has its own unique section of temporary space to store the index that has been transferred back from the indexing node.\nWhen notifying the IndexCompiler it will also pass the location of the particular FileSenders directory location of that index.\n4.\nPEER-TO-PEER DISTRIBUTION Apocrita uses a peer-to-peer distribution model in order to distribute files.\nFiles are distributed solely from a serving node to a client node without regard for the availability of file pieces from other clients in the network.\nThis means that the file transfers will be fast and efficient and should not severely affect the usability of serving nodes from the point of view of a local user.\nThe JXTA framework [5] is used in order to implement peer-to-peer functionality.\nThis has been decided due to the extremely shorttimeline of the project which allows us to take advantage of over five years of testing and development and support from many large organizations employing JXTA in their own products.\nWe are not concerned with any potential quality problems because JXTA is considered to be the most mature and stable peer-to-peer framework available.\nUsing JXTA terminology, there are three types of peers used in node classification.\nEdge peers are typically low-bandwidth, non-dedicated nodes.\nDue to these characteristics, edge peers are not used with Apocrita.\nRelay peers are typically higher-bandwidth, dedicated nodes.\nThis is the classification of all nodes in the Apocrita network, and, as such, are the default classification used.\nRendezvous peers are used to coordinate message passing between nodes in the Apocrita network.\nThis means that a minimum of one rendezvous peer per subnet is required.\n4.1 Peer Discovery The Apocrita server subsystem uses the JXTA Peer Discovery Protocol (PDP) in order to find participating peers within the network as shown in Figure 2.\nFigure 2.\nApocrita peer discovery process.\n176 The PDP listens for peer advertisements from other nodes in the Apocrita swarm.\nIf a peer advertisement is detected, the server will attempt to join the peer group and start actively contributing to the network.\nIf no peers are found by the discovery service, the server will create a new peer group and start advertising this peer group.\nThis new peer group will be periodically advertised on the network; any new peers joining the network will attach to this peer group.\nA distinct advantage of using the JXTA PDP is that Apocrita does not have to be sensitive to particular 
networking nuances such as Maximum Transmission Unit (MTU).\nIn addition, Apocrita does not have to support one-to-many packet delivery methods such as multicast and instead can rely on JXTA for this support.\n4.2 Index Query Operation All nodes in the Apocrita swarm have a complete and up-to-date copy of the network index stored locally.\nThis makes querying the index for search results trivial.\nUnlike the Gnutella protocol, a query does not have to propagate throughout the network.\nThis also means that the time to return query results is very fast - much faster than protocols that rely on nodes in the network to pass the query throughout the network and then wait for results.\nThis is demonstrated in Figure 3.\nFigure 3.\nApocrita query operation.\nEach document in the swarm has a unique document identification number (ID).\nA node will query the index and a result will be returned with both the document ID number as well as a list of peers with a copy of the matched document ID.\nIt is then the responsibility of the searching peer to contact the peers in the list to negotiate file transfer between the client and server.\n5.\nPROTOTYPE IMPLEMENTATION Apocrita uses the Lucene framework [7], which is a project under development by the Apache Software Foundation.\nApache Lucene is a high-performance, full-featured text search engine library written entirely in Java.\nIn the current implementation, Apocrita is only capable of indexing plain text documents.\nApocrita uses the JXTA framework [5] as a peer-to-peer transport library between nodes.\nJXTA is used to pass both messages and files between nodes in the search network.\nBy using JXTA, Apocrita takes advantage of a reliable, and proven peer-to-peer transport mechanism.\nIt uses the pipe facility in order to pass messages and files between nodes.\nThe pipe facility provides many different types of pipe advertisements.\nThis includes an unsecured unicast pipe, a secured unicast pipe, and a propagated unsecured pipe.\nMessage passing is used to pass status messages between nodes in order to aid in indexing, searching, and retrieval.\nFor example, a node attempting to find an idle node to participate in indexing will query nodes via the message facility.\nIdle nodes will reply with a status message to indicate they are available to start indexing.\nFile passing is used within Apocrita for file transfer.\nAfter a file has been searched for and located within the peer group, a JXTA socket will be opened and file transfer will take place.\nA JXTA socket is similar to a standard Java socket, however a JXTA socket uses JXTA pipes in underlying network transport.\nFile passing uses an unsecured unicast pipe in order to transfer data.\nFile passing is also used within Apocrita for index transfer.\nIndex transfer works exactly like a file transfer.\nIn fact, the index transfer actually passes the index as a file.\nHowever, there is one key difference between file transfer and index transfer.\nIn the case of file transfer, a socket is created between only two nodes.\nIn the case of index transfer, a socket must be created between all nodes in the network in order to pass the index, which allows for all nodes to have a full and complete index of the entire network.\nIn order to facilitate this transfer efficiently, index transfer will use an unsecured propagated pipe to communicate with all nodes in the Apocrita network.\n6.\nPERFORMANCE EVALUATION It is difficult to objectively benchmark the results obtained through Apocrita because there is no 
other system currently available with the same goals as Apocrita.\nWe have, however, evaluated the performance of the critical sections of the system.\nThe critical sections were determined to be the processes that are the most time intensive.\nThe evaluation was completed on standard lab computers on a 100Mb/s Ethernet LAN; the machines run Windows XP with a Pentium 4 CPU running at 2.4GHz with 512 MB of RAM.\nThe indexing time has been run against both: the Time Magazine collection [8], which contains 432 documents and 83 queries and their most relevant results, and the NPL collection [8] that has a total of 11,429 documents and 93 queries with expected results.\nEach document ranges in size between 4KB and 8KB.\nAs Figure 4 demonstrates, the number of nodes involved in the indexing process affects the time taken to complete the indexing processsometimes even drastically.\nFigure 4.\nNode vs. index time.\nThe difference in going from one indexing node to two indexing nodes is the most drastic and equates to an indexing time 37% faster than a single indexing node.\nThe different between two 177 indexing nodes and three indexing nodes is still significant and represents a 16% faster time than two indexing nodes.\nAs the number of indexing nodes increases the results are less dramatic.\nThis can be attributed to the time overhead associated with having many nodes perform indexing.\nThe time needed to communicate with a node is constant, so as the number of nodes increases, this constant becomes more prevalent.\nAlso, the complexity of joining the indexing results is a complex operation and is complicated further as the number of indexing nodes increases.\nSocket performance is also a very important part of Apocrita.\nBenchmarks were performed using a 65MB file on a system with both the client and server running locally.\nThis was done to isolate possible network issues.\nAlthough less drastic, similar results were shown when the client and server run on independent hardware.\nIn order to mitigate possible unexpected errors, each test was run 10 times.\nFigure 5.\nJava sockets vs. 
JXTA sockets.\nAs Figure 5 demonstrates, the performance of JXTA sockets is abysmal as compared to the performance of standard Java sockets.\nThe minimum transfer rate obtained using Java sockets is 81,945KB/s while the minimum transfer rater obtained using JXTA sockets is much lower at 3, 805KB/s.\nThe maximum transfer rater obtain using Java sockets is 97,412KB/s while the maximum transfer rate obtained using JXTA sockets is 5,530KB/s.\nFinally, the average transfer rate using Java sockets is 87,540KB/s while the average transfer rate using JXTA sockets is 4,293KB/s.\nThe major problem found in these benchmarks is that the underlying network transport mechanism does not perform as quickly or efficiently as expected.\nIn order to garner a performance increase, the JXTA framework needs to be substituted with a more traditional approach.\nThe indexing time is also a bottleneck and will need to be improved for the overall quality of Apocrita to be improved.\n7.\nRELATED WORK Several decentralized P2P systems [1, 2, 3] exist today that Apocrita features some of their functionality.\nHowever, Apocrita also has unique novel searching and indexing features that make this system unique.\nFor example, Majestic-12 [4] is a distributed search and indexing project designed for searching the Internet.\nEach user would install a client, which is responsible for indexing a portion of the web.\nA central area for querying the index is available on the Majestic-12 web page.\nThe index itself is not distributed, only the act of indexing is distributed.\nThe distributed indexing aspect of this project most closely relates Apocrita goals.\nYaCy [6] is a peer-to-peer web search application.\nYaCy consists of a web crawler, an indexer, a built-in database engine, and a p2p index exchange protocol.\nYaCy is designed to maintain a distributed index of the Internet.\nIt used a distributed hash table (DHT) to maintain the index.\nThe local node is used to query but all results that are returned are accessible on the Internet.\nYaCy used many peers and DHT to maintain a distributed index.\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT.\nYaCy however, is designed as a web search engine and, as such solves a much different problem than Apocrita.\n8.\nCONCLUSIONS AND FUTURE WORK We presented Apocrita, a distributed P2P searching and indexing system intended for network users on an Intranet.\nIt can help organizations with no network file server or necessary network infrastructure to share documents.\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the possibility of conflicting versions being distributed.\nA proof of concept prototype has been constructed, but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned.\nDespite these shortcomings, the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems.\nFor future work, Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the network to ensure a copy of every important file will always be available regardless of the availability of specific nodes in the network.\nIn addition, we plan to integrate a revision control system into the content distribution portion of Apocrita so that users could have 
9.\nREFERENCES [1] Rodrigues, R., Liskov, B., Shrira, L.: The Design of a Robust Peer-to-Peer System.\nAvailable online: http://www.pmg.lcs.mit.edu/~rodrigo/ew02-robust.pdf.\n[2] Chawathe, Y., Ratnasamy, S., Breslau, L., Lanham, N., and Shenker, S.: Making Gnutella-like P2P Systems Scalable.\nIn Proceedings of SIGCOMM '03, Karlsruhe, Germany.\n[3] Harvest: A Distributed Search System: http://harvest.sourceforge.net.\n[4] Majestic-12: Distributed Search Engine: http://www.majestic12.co.uk.\n[5] JXTA: http://www.jxta.org.\n[6] YaCy: Distributed P2P-based Web Indexing: http://www.yacy.net/yacy.\n[7] Lucene Search Engine Library: http://lucene.apache.org.\n[8] Test Collections (Time Magazine and NPL): www.dcs.gla.ac.uk/idom/ir_resources/test_collections.", "lvl-3": "Apocrita : A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes , and such documents may need to be accessible by all members of the organization .\nThis access may be needed for editing or simply viewing a document .\nIn some cases these documents are shared between authors , via email , to be edited .\nThis can easily cause an incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document
.\nThere may even be multiple different documents in the process of being edited .\nThe user may be required to search for a particular document , which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user 's machine .\nFurthermore , some organizations do not have a file sharing server or the necessary network infrastructure to enable one .\nIn this paper we present Apocrita , which is a cost-effective distributed P2P file sharing system for such organizations .\nThe rest of this paper is organized as follows .\nIn section 2 , we present Apocrita .\nThe distributed indexing mechanism and protocol are presented in Section 3 .\nSection 4 presents the peer-topeer distribution model .\nA proof of concept prototype is presented in Section 5 , and performance evaluations are discussed in Section 6 .\nRelated work is presented is Section 7 , and finally conclusions and future work are discussed in Section 8 .\n2 .\nAPOCRITA\n3 .\nDISTRIBUTED INDEXING\n3.1 Indexing Protocol\n4 .\nPEER-TO-PEER DISTRIBUTION\n4.1 Peer Discovery\n4.2 Index Query Operation\n5 .\nPROTOTYPE IMPLEMENTATION\n6 .\nPERFORMANCE EVALUATION\n7 .\nRELATED WORK\nSeveral decentralized P2P systems [ 1 , 2 , 3 ] exist today that Apocrita features some of their functionality .\nHowever , Apocrita also has unique novel searching and indexing features that make this system unique .\nFor example , Majestic-12 [ 4 ] is a distributed search and indexing project designed for searching the Internet .\nEach user would install a client , which is responsible for indexing a portion of the web .\nA central area for querying the index is available on the Majestic-12 web page .\nThe index itself is not distributed , only the act of indexing is distributed .\nThe distributed indexing aspect of this project most closely relates Apocrita goals .\nYaCy [ 6 ] is a peer-to-peer web search application .\nYaCy consists of a web crawler , an indexer , a built-in database engine , and a p2p index exchange protocol .\nYaCy is designed to maintain a distributed index of the Internet .\nIt used a distributed hash table ( DHT ) to maintain the index .\nThe local node is used to query but all results that are returned are accessible on the Internet .\nYaCy used many peers and DHT to maintain a distributed index .\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT .\nYaCy however , is designed as a web search engine and , as such solves a much different problem than Apocrita .\n8 .\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita , a distributed P2P searching and indexing system intended for network users on an Intranet .\nIt can help organizations with no network file server or necessary network infrastructure to share documents .\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the possibility of conflicting versions being distributed .\nA proof of concept prototype has been constructed , but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned .\nDespite these shortcomings , the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems .\nFor future work , Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the 
network to ensure a copy of every important file will always be available regardless of the availability of specific nodes in the network .\nIn addition , we plan to integrate a revision control system into the content distribution portion of Apocrita so that users could have the ability to update an existing file that they found and have the old revision maintained and the new revision propagated .\nFinally , the current implementation has some overhead and redundancy due to the fact that the entire index is maintained on each individual node , we plan to design a distributed index .", "lvl-4": "Apocrita : A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes , and such documents may need to be accessible by all member of the organization .\nThis access may be needed for editing or simply viewing a document .\nIn some cases these documents are shared between authors , via email , to be edited .\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document .\nThere may even be multiple different documents in the process of being edited .\nThe user may be required to search for a particular document , which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user 's machine .\nAnother problem arises when a document is made available on a user 's machine and that user is offline , in which case the document is no longer accessible .\nIn this paper we present Apocrita , a revolutionary distributed P2P file sharing system for Intranets .\n1 .\nINTRODUCTION\nThe Peer-to-Peer ( P2P ) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet .\nWith the increasingly common place broadband Internet access , P2P technology has finally become a viable way to share documents and media files .\nThere are already programs on the market that enable P2P file sharing .\nThese programs enable millions of users to share files among themselves .\nThe downloaded files still require a lot of manual management by the user .\nThe user still needs to put the files in the proper directory , manage files with multiple versions , delete the files when they are no longer wanted .\nWe strive to make the process of sharing documents within an Intranet easier .\nMany organizations are required to author documents for various purposes , and such documents may need to be accessible by all members of the organization .\nThis access may be needed for editing or simply viewing a document .\nIn some cases these documents are sent between authors , via email , to be edited .\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document .\nThere may even be multiple different documents in the process of being edited .\nThe user may be required to search for a particular document , which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user 's machine .\nFurthermore , some organizations do not have a file sharing server or the necessary network infrastructure to enable one .\nIn this paper we present Apocrita , which is a cost-effective distributed P2P file sharing system for such organizations .\nIn section 2 , we present Apocrita .\nThe distributed indexing mechanism and protocol are presented in Section 3 .\nSection 4 
presents the peer-topeer distribution model .\nA proof of concept prototype is presented in Section 5 , and performance evaluations are discussed in Section 6 .\nRelated work is presented is Section 7 , and finally conclusions and future work are discussed in Section 8 .\n7 .\nRELATED WORK\nSeveral decentralized P2P systems [ 1 , 2 , 3 ] exist today that Apocrita features some of their functionality .\nHowever , Apocrita also has unique novel searching and indexing features that make this system unique .\nFor example , Majestic-12 [ 4 ] is a distributed search and indexing project designed for searching the Internet .\nEach user would install a client , which is responsible for indexing a portion of the web .\nA central area for querying the index is available on the Majestic-12 web page .\nThe index itself is not distributed , only the act of indexing is distributed .\nThe distributed indexing aspect of this project most closely relates Apocrita goals .\nYaCy [ 6 ] is a peer-to-peer web search application .\nYaCy is designed to maintain a distributed index of the Internet .\nIt used a distributed hash table ( DHT ) to maintain the index .\nThe local node is used to query but all results that are returned are accessible on the Internet .\nYaCy used many peers and DHT to maintain a distributed index .\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT .\nYaCy however , is designed as a web search engine and , as such solves a much different problem than Apocrita .\n8 .\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita , a distributed P2P searching and indexing system intended for network users on an Intranet .\nIt can help organizations with no network file server or necessary network infrastructure to share documents .\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the possibility of conflicting versions being distributed .\nDespite these shortcomings , the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems .", "lvl-2": "Apocrita : A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes , and such documents may need to be accessible by all member of the organization .\nThis access may be needed for editing or simply viewing a document .\nIn some cases these documents are shared between authors , via email , to be edited .\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document .\nThere may even be multiple different documents in the process of being edited .\nThe user may be required to search for a particular document , which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user 's machine .\nAnother problem arises when a document is made available on a user 's machine and that user is offline , in which case the document is no longer accessible .\nIn this paper we present Apocrita , a revolutionary distributed P2P file sharing system for Intranets .\n1 .\nINTRODUCTION\nThe Peer-to-Peer ( P2P ) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet .\nWith the increasingly common place broadband Internet access , P2P technology has finally become a viable way to share documents and media files 
.\nThere are already programs on the market that enable P2P file sharing .\nThese programs enable millions of users to share files among themselves .\nWhile the utilization of P2P clients is already a gigantic step forward compared to downloading files off websites , using such programs are not without their problems .\nThe downloaded files still require a lot of manual management by the user .\nThe user still needs to put the files in the proper directory , manage files with multiple versions , delete the files when they are no longer wanted .\nWe strive to make the process of sharing documents within an Intranet easier .\nMany organizations are required to author documents for various purposes , and such documents may need to be accessible by all members of the organization .\nThis access may be needed for editing or simply viewing a document .\nIn some cases these documents are sent between authors , via email , to be edited .\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document .\nThere may even be multiple different documents in the process of being edited .\nThe user may be required to search for a particular document , which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user 's machine .\nFurthermore , some organizations do not have a file sharing server or the necessary network infrastructure to enable one .\nIn this paper we present Apocrita , which is a cost-effective distributed P2P file sharing system for such organizations .\nThe rest of this paper is organized as follows .\nIn section 2 , we present Apocrita .\nThe distributed indexing mechanism and protocol are presented in Section 3 .\nSection 4 presents the peer-topeer distribution model .\nA proof of concept prototype is presented in Section 5 , and performance evaluations are discussed in Section 6 .\nRelated work is presented is Section 7 , and finally conclusions and future work are discussed in Section 8 .\n2 .\nAPOCRITA\nApocrita is a distributed peer-to-peer file sharing system , and has been designed to make finding documents easier in an Intranet environment .\nCurrently , it is possible for documents to be located on a user 's machine or on a remote machine .\nIt is even possible that different revisions could reside on each node on the Intranet .\nThis means there must be a manual process to maintain document versions .\nApocrita solves this problem using two approaches .\nFirst , due to the inherent nature of Apocrita , the document will only reside on a single logical location .\nSecond , Apocrita provides a method of reverting to previous document versions .\nApocrita\nwill also distribute documents across multiple machines to ensure high availability of important documents .\nFor example , if a machine contains an important document and the machine is currently inaccessible , the system is capable of maintaining availability of the document through this distribution mechanism .\nIt provides a simple interface for searching and accessing files that may exist either locally or remotely .\nThe distributed nature of the documents is transparent to the user .\nApocrita supports a decentralized network model where the peers use a discovery protocol to determine peers .\nApocrita is intended for network users on an Intranet .\nThe main focus is organizations that may not have a network large enough to require a file server and supporting infrastructure .\nIt 
eliminates the need for documents to be manually shared between users while being edited and reduces the possibility of conflicting versions being distributed .\nThe system also provides some redundancy and in the event of a single machine failure , no important documents will be lost .\nIt is operating system independent , and easy to access through a web browser or through a standalone application .\nTo decrease the time required for indexing a large number of documents , the indexing process is distributed across available idle nodes .\nLocal and remote files should be easily accessible through a virtual mountable file system , providing transparency for users .\n3 .\nDISTRIBUTED INDEXING\nApocrita uses a distributed index for all the documents that are available on the Intranet .\nEach node will contain part of the full index , and be aware of what part of the index each other node has .\nA node will be able to contact each node that contains a unique portion of the index .\nIn addition , each node has a separate local index of its own documents .\nBut as discussed later , in the current implementation , each node has a copy of the entire index .\nIndexing of the documents is distributed .\nTherefore , if a node is in the process of indexing many documents , it will break up the work over the nodes .\nOnce a node 's local index is updated with the new documents , the distributed index will then be updated .\nThe current distributed indexing system consists of three separate modules : NodeController , FileSender , and NodeIndexer .\nThe responsibility of each module is discussed later in this section .\n3.1 Indexing Protocol\nThe protocol we have designed for the distributed indexing is depicted in Figure 1 .\nFigure 1 .\nApocrita distributed indexing protocol .\nIDLE QUERY : The IDLE QUERY is sent out from the initiating node to determine which other nodes may be able to help with the overall indexing process .\nThere are no parameters sent with the command .\nThe receiving node will respond with either a BUSY or IDLE command .\nIf the IDLE command is received , the initiating node will add the responding node to a list of available distributed indexing helpers .\nIn the case of a BUSY command being received , the responding node is ignored .\nBUSY : Once a node receives an IDLE QUERY , it will determine whether it can be considered a candidate for distributed indexing .\nThis determination is based on the overall CPU usage of the node .\nIf the node is using most of its CPU for other processes , the node will respond to the IDLE QUERY with a BUSY command .\nIDLE : As with the case of the BUSY response , the node receiving the IDLE QUERY will determine its eligibility for distributed indexing .\nTo be considered a candidate for distributed indexing , the overall CPU usage must be at a minimum to allow for dedicated indexing of the distributed documents .\nIf this is the case , the node will respond with an IDLE command .\nINCOMING FILE : Once the initiating node assembles a set of idle nodes to assist with the distributed indexing , it will divide the documents to be sent to the nodes .\nTo do this , it sends an INCOMING FILE message , which contains the name of the file as well as the size in bytes .\nAfter the INCOMING FILE command has been sent , the initiating node will begin to stream the file to the other node .\nThe initiating node will loop through the files that are to be sent to the other node ; each file stream being preceded by the INCOMING FILE command with the appropriate parameters.\nINDEX FILE : Once the indexing node has completed the indexing process of the set of files , it must send the resultant index back to the initiating node .\nThe index is comprised of multiple files , which exist on the file system of the indexing node .\nAs with the INCOMING FILE command , the indexing node streams each index file after sending an INDEX FILE command .\nThe INDEX FILE command has two parameters : the first being the name of the index , and the second is the size of the file in bytes .\nSEND COMPLETE : When sending the sets of files for both the index and the files to be indexed , the node must notify the corresponding node when the process is complete .\nOnce the initiating node is finished sending the set of documents to be indexed , it will then send a SEND COMPLETE command indicating to the indexing node that there are no more files and the node can proceed with indexing the files .\nIn the case of the indexing node sending the index files back , it will complete the transfer with a SEND COMPLETE command , indicating to the initiating node that there are no more index files to be sent and that the initiating node can then assemble those index files into the main index .\n
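The IDLE QUERY / BUSY / IDLE exchange above hinges on a CPU-usage check on the receiving node: it answers BUSY when CPU usage stays above 50% for a specified amount of time and IDLE otherwise (as detailed in the NodeIndexer description that follows). A minimal sketch of such a responder in Java is shown below; the use of com.sun.management.OperatingSystemMXBean for CPU sampling, the sampling interval, and the observation window are assumptions for illustration rather than Apocrita's actual implementation.

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    // Illustrative responder for the IDLE QUERY handshake described above.
    // Command names follow the protocol text; the CPU sampling mechanism and
    // the observation window are assumptions, not Apocrita's code.
    public class IdleQueryResponder {
        private static final double BUSY_THRESHOLD = 0.50;  // "above 50%" per the protocol description
        private final OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

        /** Returns "IDLE" if this node can take on indexing work, otherwise "BUSY". */
        public String respondToIdleQuery() throws InterruptedException {
            boolean sustainedBusy = true;
            for (int i = 0; i < 5; i++) {                    // assumed 5-sample observation window
                double load = os.getSystemCpuLoad();         // system CPU load in [0.0, 1.0]
                if (load >= 0 && load <= BUSY_THRESHOLD) {
                    sustainedBusy = false;                   // usage dipped below the threshold
                }
                Thread.sleep(1_000);                         // assumed 1-second sampling interval
            }
            return sustainedBusy ? "BUSY" : "IDLE";
        }
    }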
parameters .\nINDEX FILE : Once the indexing node has completed the indexing process of the set of files , it must send the resultant index back to the initiating node .\nThe index is comprised of multiple files , which exist on the file system of the indexing node .\nAs with the INCOMING FILE command , the indexing node streams each index file after sending an INDEX FILE command .\nThe INDEX FILE command has two parameters : the first being the name of the index , and the second is the size of the file in bytes .\nSEND COMPLETE : When sending the sets of files for both the index and the files to be indexed , the node must notify the corresponding node when the process is complete .\nOnce the initiating node is finished sending the set of documents to be indexed , it will then send a SEND COMPLETE command indicating to the indexing node that there are no more files and the node can proceed with indexing the files .\nIn the case of the initiating node sending the index files , the indexing node will complete the transfer with the SEND COMPLETE command indicating to the initiating node that there are no more index files to be sent and the initiating node can then assemble those index files into the main index .\nThe NodeController is responsible for setting up connections with nodes in the idle state to distribute the indexing process .\nUsing JXTA [ 5 ] , the node controller will obtain a set of nodes .\nThis set of nodes is iterated and each one is sent the IDLE QUERY command .\nThe nodes that respond with idle are then collected .\nThe set of idle nodes includes the node initiating the distributed indexing process , referred to as the local node .\nOnce the collection of idle nodes is obtained , the node updates the set of controllers and evenly divides the set of documents that are to be indexed .\nFor example , if there are 100 documents and 10 nodes ( including the local node ) then each node will have 10 documents to index .\nFor each indexing node an instance of the FileSender object is created .\nThe FileSender is aware of the set of documents that node is responsible for .\nOnce a FileSender object has been created for each node , the NodeController waits for each FileSender to complete .\nWhen the FileSender objects have completed the NodeController will take the resultant indexes from\neach node and pass them to an instance of the IndexCompiler , which maintains the index and the list of FileSenders .\nOnce the IndexCompiler has completed it will return to the idle state and activate the directory scanner to monitor the locally owned set of documents for changes that may require reindexing .\nThe NodeIndexer is responsible for receiving documents sent to it by the initiating node and then indexing them using the Lucene engine [ 7 ] .\nOnce the indexing is complete the resulting index is streamed back to the initiating node as well as compiled in the indexer nodes own local index .\nBefore initiating the indexing process it must be sent an IDLE QUERY message .\nThis is the first command that sets off the indexing process .\nThe indexer node will determine whether it is considered idle based on the current CPU usage .\nAs outlined in the protocol section if the node is not being used and has a low overall CPU usage percentage it will return IDLE to the IDLE QUERY command .\nIf the indexer nodes CPU usage is above 50 % for a specified amount of time it is then considered to be busy and will respond to the IDLE QUERY command with BUSY .\nIf a node is determined busy it returns to its 
listening state waiting for another IDLE QUERY from another initiating node .\nIf the node is determined to be idle it will enter the state where it will receive files from the initiating node that it is responsible for indexing .\nOnce all of the files are received by the initiating node , indicated by a SEND COMPLETE message , it starts an instance of the Lucene indexing engine .\nThe files are stored in a temporary directory separate from the nodes local documents that it is responsible for maintaining an index of .\nThe Lucene index writer then indexes all of the transferred files .\nThe index is stored on the drive within a temporary directory separate from the current index .\nAfter the indexing of the files completes the indexer node enters the state where the index files are sent back to the initiating node .\nThe indexer node loops through all of the files created by Lucene 's IndexWriter and streams them to the initiating node .\nOnce these files are sent back that index is then merged into the indexer nodes own full index of the existing files .\nIt then enters the idle state where it will then listen for any other nodes that required distributing the indexing process .\nThe FileSender object is the initiating node equivalent of the indexer node .\nIt initiates the communication between the initiating node and the node that will assist in the distributed indexing .\nThe initiating node runs many instances of the FileSender node one for each other node it has determined to be idle .\nUpon instantiation of the FileSender it is passed the node that it is responsible for contacting and the set of files that must be sent .\nThe FileSender 's first job is to send the files that are to be indexed by the other idle node .\nThe files are streamed one at a time to the other node .\nIt sends each file using the INCOMING FILE command .\nWith that command it sends the name of the file being sent and the size in bytes .\nOnce all files have been sent the FileSender sends the SEND COMPLETE command .\nThe FileSender creates an instance of Lucene 's IndexWriter and prepares to create the index in a temporary directory on the file system .\nThe FileSender will begin to receive the files that are to be saved within the index .\nIt receives an INDEX FILE command with the name of the files and the size in bytes .\nThis file is then streamed into the temporary index directory on the FileSender node .\nAfter the transfer of the index files has been completed the FileSender notifies the instance of the index compiler that it is ready to combine the index .\nEach instance of the FileSender has its own unique section of temporary space to store the index that has been transferred back from the indexing node .\nWhen notifying the IndexCompiler it will also pass the location of the particular FileSenders directory location of that index .\n4 .\nPEER-TO-PEER DISTRIBUTION\nApocrita uses a peer-to-peer distribution model in order to distribute files .\nFiles are distributed solely from a serving node to a client node without regard for the availability of file pieces from other clients in the network .\nThis means that the file transfers will be fast and efficient and should not severely affect the usability of serving nodes from the point of view of a local user .\nThe JXTA framework [ 5 ] is used in order to implement peer-to-peer functionality .\nThis has been decided due to the extremely shorttimeline of the project which allows us to take advantage of over five years of testing and development and 
support from many large organizations employing JXTA in their own products .\nWe are not concerned with any potential quality problems because JXTA is considered to be the most mature and stable peer-to-peer framework available .\nUsing JXTA terminology , there are three types of peers used in node classification .\nEdge peers are typically low-bandwidth , non-dedicated nodes .\nDue to these characteristics , edge peers are not used with Apocrita .\nRelay peers are typically higher-bandwidth , dedicated nodes .\nThis is the classification of all nodes in the Apocrita network , and , as such , are the default classification used .\nRendezvous peers are used to coordinate message passing between nodes in the Apocrita network .\nThis means that a minimum of one rendezvous peer per subnet is required .\n4.1 Peer Discovery\nThe Apocrita server subsystem uses the JXTA Peer Discovery Protocol ( PDP ) in order to find participating peers within the network as shown in Figure 2 .\nFigure 2 .\nApocrita peer discovery process .\nThe PDP listens for peer advertisements from other nodes in the Apocrita swarm .\nIf a peer advertisement is detected , the server will attempt to join the peer group and start actively contributing to the network .\nIf no peers are found by the discovery service , the server will create a new peer group and start advertising this peer group .\nThis new peer group will be periodically advertised on the network ; any new peers joining the network will attach to this peer group .\nA distinct advantage of using the JXTA PDP is that Apocrita does not have to be sensitive to particular networking nuances such as Maximum Transmission Unit ( MTU ) .\nIn addition , Apocrita does not have to support one-to-many packet delivery methods such as multicast and instead can rely on JXTA for this support .\n4.2 Index Query Operation\nAll nodes in the Apocrita swarm have a complete and up-to-date copy of the network index stored locally .\nThis makes querying the index for search results trivial .\nUnlike the Gnutella protocol , a query does not have to propagate throughout the network .\nThis also means that the time to return query results is very fast -- much faster than protocols that rely on nodes in the network to pass the query throughout the network and then wait for results .\nThis is demonstrated in Figure 3 .\nFigure 3 .\nApocrita query operation .\nEach document in the swarm has a unique document identification number ( ID ) .\nA node will query the index and a result will be returned with both the document ID number as well as a list of peers with a copy of the matched document ID .\nIt is then the responsibility of the searching peer to contact the peers in the list to negotiate file transfer between the client and server .\n5 .\nPROTOTYPE IMPLEMENTATION\nApocrita uses the Lucene framework [ 7 ] , which is a project under development by the Apache Software Foundation .\nApache Lucene is a high-performance , full-featured text search engine library written entirely in Java .\nIn the current implementation , Apocrita is only capable of indexing plain text documents .\nApocrita uses the JXTA framework [ 5 ] as a peer-to-peer transport library between nodes .\nJXTA is used to pass both messages and files between nodes in the search network .\nBy using JXTA , Apocrita takes advantage of a reliable , and proven peer-to-peer transport mechanism .\nIt uses the pipe facility in order to pass messages and files between nodes .\nThe pipe facility provides many different types of pipe 
advertisements .\nThis includes an unsecured unicast pipe , a secured unicast pipe , and a propagated unsecured pipe .\nMessage passing is used to pass status messages between nodes in order to aid in indexing , searching , and retrieval .\nFor example , a node attempting to find an idle node to participate in indexing will query nodes via the message facility .\nIdle nodes will reply with a status message to indicate they are available to start indexing .\nFile passing is used within Apocrita for file transfer .\nAfter a file has been searched for and located within the peer group , a JXTA socket will be opened and file transfer will take place .\nA JXTA socket is similar to a standard Java socket , however a JXTA socket uses JXTA pipes in underlying network transport .\nFile passing uses an unsecured unicast pipe in order to transfer data .\nFile passing is also used within Apocrita for index transfer .\nIndex transfer works exactly like a file transfer .\nIn fact , the index transfer actually passes the index as a file .\nHowever , there is one key difference between file transfer and index transfer .\nIn the case of file transfer , a socket is created between only two nodes .\nIn the case of index transfer , a socket must be created between all nodes in the network in order to pass the index , which allows for all nodes to have a full and complete index of the entire network .\nIn order to facilitate this transfer efficiently , index transfer will use an unsecured propagated pipe to communicate with all nodes in the Apocrita network .\n6 .\nPERFORMANCE EVALUATION\nIt is difficult to objectively benchmark the results obtained through Apocrita because there is no other system currently available with the same goals as Apocrita .\nWe have , however , evaluated the performance of the critical sections of the system .\nThe critical sections were determined to be the processes that are the most time intensive .\nThe evaluation was completed on standard lab computers on a 100Mb/s Ethernet LAN ; the machines run Windows XP with a Pentium 4 CPU running at 2.4 GHz with 512 MB of RAM .\nThe indexing time has been run against both : the Time Magazine collection [ 8 ] , which contains 432 documents and 83 queries and their most relevant results , and the NPL collection [ 8 ] that has a total of 11,429 documents and 93 queries with expected results .\nEach document ranges in size between 4KB and 8KB .\nAs Figure 4 demonstrates , the number of nodes involved in the indexing process affects the time taken to complete the indexing process -- sometimes even drastically .\nFigure 4 .\nNode vs. 
index time .\nThe difference in going from one indexing node to two indexing nodes is the most drastic and equates to an indexing time 37 % faster than a single indexing node .\nThe difference between two indexing nodes and three indexing nodes is still significant and represents a 16 % faster time than two indexing nodes .\nAs the number of indexing nodes increases the results are less dramatic .\nThis can be attributed to the time overhead associated with having many nodes perform indexing .\nThe time needed to communicate with a node is constant , so as the number of nodes increases , this constant becomes more prevalent .\nAlso , joining the indexing results is a complex operation and is complicated further as the number of indexing nodes increases .\nSocket performance is also a very important part of Apocrita .\nBenchmarks were performed using a 65MB file on a system with both the client and server running locally .\nThis was done to isolate possible network issues .\nAlthough less drastic , similar results were shown when the client and server run on independent hardware .\nIn order to mitigate possible unexpected errors , each test was run 10 times .\nFigure 5 .\nJava sockets vs. JXTA sockets .\nAs Figure 5 demonstrates , the performance of JXTA sockets is abysmal compared to the performance of standard Java sockets .\nThe minimum transfer rate obtained using Java sockets is 81,945 KB/s while the minimum transfer rate obtained using JXTA sockets is much lower at 3,805 KB/s .\nThe maximum transfer rate obtained using Java sockets is 97,412 KB/s while the maximum transfer rate obtained using JXTA sockets is 5,530 KB/s .\nFinally , the average transfer rate using Java sockets is 87,540 KB/s while the average transfer rate using JXTA sockets is 4,293 KB/s .\nThe major problem found in these benchmarks is that the underlying network transport mechanism does not perform as quickly or efficiently as expected .\nIn order to achieve a performance increase , the JXTA framework would need to be replaced with a more traditional transport .\nThe indexing time is also a bottleneck and will need to be reduced for the overall quality of Apocrita to improve .\n7 .\nRELATED WORK\nSeveral decentralized P2P systems [ 1 , 2 , 3 ] exist today with which Apocrita shares some functionality .\nHowever , Apocrita also has novel searching and indexing features that set it apart .\nFor example , Majestic-12 [ 4 ] is a distributed search and indexing project designed for searching the Internet .\nEach user installs a client , which is responsible for indexing a portion of the web .\nA central area for querying the index is available on the Majestic-12 web page .\nThe index itself is not distributed ; only the act of indexing is distributed .\nThe distributed indexing aspect of this project most closely relates to Apocrita 's goals .\nYaCy [ 6 ] is a peer-to-peer web search application .\nYaCy consists of a web crawler , an indexer , a built-in database engine , and a P2P index exchange protocol .\nYaCy is designed to maintain a distributed index of the Internet , using a distributed hash table ( DHT ) maintained by many peers .\nQueries are issued from the local node , but all returned results are accessible on the Internet .\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT .\nYaCy , however , is designed as a web search engine and
, as such solves a much different problem than Apocrita .\n8 .\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita , a distributed P2P searching and indexing system intended for network users on an Intranet .\nIt can help organizations with no network file server or necessary network infrastructure to share documents .\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the possibility of conflicting versions being distributed .\nA proof of concept prototype has been constructed , but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned .\nDespite these shortcomings , the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems .\nFor future work , Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the network to ensure a copy of every important file will always be available regardless of the availability of specific nodes in the network .\nIn addition , we plan to integrate a revision control system into the content distribution portion of Apocrita so that users could have the ability to update an existing file that they found and have the old revision maintained and the new revision propagated .\nFinally , the current implementation has some overhead and redundancy due to the fact that the entire index is maintained on each individual node , we plan to design a distributed index ."} {"id": "H-10", "title": "", "abstract": "", "keyphrases": ["document cluster", "regular", "global regular", "cluster hierarchi", "spectrum", "specifi search", "hierarch method", "partit method", "label predict", "function estim", "manifold", "document cluster"], "prmu": [], "lvl-1": "Regularized Clustering for Documents \u2217 Fei Wang, Changshui Zhang State Key Lab of Intelligent Tech.\nand Systems Department of Automation, Tsinghua University Beijing, China, 100084 feiwang03@gmail.com Tao Li School of Computer Science Florida International University Miami, FL 33199, U.S.A. 
taoli@cs.fiu.edu ABSTRACT In recent years, document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nIn this paper, we propose a novel method for clustering documents using regularization.\nUnlike traditional globally regularized clustering methods, our method first construct a local regularized linear label predictor for each document vector, and then combine all those local regularizers with a global smoothness regularizer.\nSo we call our algorithm Clustering with Local and Global Regularization (CLGR).\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods.\nFinally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-Clustering; I.2.6 [Artificial Intelligence]: Learning-Concept Learning General Terms Algorithms 1.\nINTRODUCTION Document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nA good document clustering approach can assist the computers to automatically organize the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for complementing the deficiencies of traditional information retrieval technologies.\nAs pointed out by [8], the information retrieval needs can be expressed by a spectrum ranged from narrow keyword-matching based search to broad information browsing such as what are the major international events in recent months.\nTraditional document retrieval engines tend to fit well with the search end of the spectrum, i.e. 
they usually provide specified search for documents matching the user``s query, however, it is hard for them to meet the needs from the rest of the spectrum in which a rather broad or vague information is needed.\nIn such cases, efficient browsing through a good cluster hierarchy will be definitely helpful.\nGenerally, document clustering methods can be mainly categorized into two classes: hierarchical methods and partitioning methods.\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches.\nFor example, hierarchical agglomerative clustering (HAC) [13] is a typical bottom-up hierarchical clustering method.\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster.\nOn the other hand, partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions.\nFor instance, K-means [13] is a typical partitioning method which aims to minimize the sum of the squared distance between the data points and their corresponding cluster centers.\nIn this paper, we will focus on the partitioning methods.\nAs we know that there are two main problems existing in partitioning methods (like Kmeans and Gaussian Mixture Model (GMM) [16]): (1) the predefined criterion is usually non-convex which causes many local optimal solutions; (2) the iterative procedure (e.g. the Expectation Maximization (EM) algorithm) for optimizing the criterions usually makes the final solutions heavily depend on the initializations.\nIn the last decades, many methods have been proposed to overcome the above problems of the partitioning methods [19][28].\nRecently, another type of partitioning methods based on clustering on data graphs have aroused considerable interests in the machine learning and data mining community.\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph, in which the graph nodes represent the data points, and the weights on the edges correspond to the similarities between pairwise points.\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph.\nFor example Spectral Clustering is one kind of the most representative graph-based clustering approaches, it generally aims to optimize some cut value (e.g. Normalized Cut [22], Ratio Cut [7], Min-Max Cut [11]) defined on an undirected graph.\nAfter some relaxations, these criterions can usually be optimized via eigen-decompositions, which is guaranteed to be global optimal.\nIn this way, spectral clustering efficiently avoids the problems of the traditional partitioning methods as we introduced in last paragraph.\nIn this paper, we propose a novel document clustering algorithm that inherits the superiority of spectral clustering, i.e. 
the final cluster results can also be obtained by exploiting the eigen-structure of a symmetric matrix.\nHowever, unlike spectral clustering, which just enforces a smoothness constraint on the data labels over the whole data manifold [2], our method first constructs a regularized linear label predictor for each data point from its neighborhood as in [25], and then combines the results of all these local label predictors with a global label smoothness regularizer.\nSo we call our method Clustering with Local and Global Regularization (CLGR).\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [31], and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods.\nThe rest of this paper is organized as follows: in section 2 we introduce our CLGR algorithm in detail.\nThe experimental results on several datasets are presented in section 3, followed by the conclusions and discussions in section 4.\n2.\nTHE PROPOSED ALGORITHM In this section, we will introduce our Clustering with Local and Global Regularization (CLGR) algorithm in detail.\nFirst let us see how the documents are represented throughout this paper.\n2.1 Document Representation In our work, all the documents are represented by weighted term-frequency vectors.\nLet $W = \{w_1, w_2, \cdots, w_m\}$ be the complete vocabulary set of the document corpus (which is preprocessed by stopword removal and word stemming).\nThe term-frequency vector $x_i$ of document $d_i$ is defined as $x_i = [x_{i1}, x_{i2}, \cdots, x_{im}]^T$, $x_{ik} = t_{ik} \log \frac{n}{idf_k}$, where $t_{ik}$ is the term frequency of $w_k \in W$, $n$ is the size of the document corpus, and $idf_k$ is the number of documents that contain word $w_k$.\nIn this way, $x_i$ is also called the TF-IDF representation of document $d_i$.\nFurthermore, we also normalize each $x_i$ ($1 \le i \le n$) to have unit length, so that each document is represented by a normalized TF-IDF vector.\n2.2 Local Regularization As its name suggests, CLGR is composed of two parts: local regularization and global regularization.\nIn this subsection we will introduce the local regularization part in detail.\n2.2.1 Motivation As we know, clustering is one type of learning technique; it aims to organize the dataset in a reasonable way.\nGenerally speaking, learning can be posed as a problem of function estimation, from which we can get a good classification function that will assign labels to the training dataset and even the unseen testing dataset with some cost minimized [24].\nFor example, in the two-class classification scenario1 (in which we exactly know the label of each document), a linear classifier with least-squares fit aims to learn a column vector $w$ such that the squared cost $J = \frac{1}{n} \sum_{i=1}^{n} (w^T x_i - y_i)^2$ (1) is minimized, where $y_i \in \{+1, -1\}$ is the label of $x_i$.\nBy taking $\partial J / \partial w = 0$, we get the solution $w^* = \left( \sum_{i=1}^{n} x_i x_i^T \right)^{-1} \left( \sum_{i=1}^{n} x_i y_i \right)$, (2) which can further be written in matrix form as $w^* = (X X^T)^{-1} X y$, (3) where $X = [x_1, x_2, \cdots, x_n]$ is an $m \times n$ document matrix and $y = [y_1, y_2, \cdots, y_n]^T$ is the label vector.\nThen for a test document $u$, we can determine its label by $l = \mathrm{sign}(w^{*T} u)$, (4) where $\mathrm{sign}(\cdot)$ is the sign function.\nA natural problem in Eq.\n(3) is that the matrix $X X^T$ may be singular and thus not invertible (e.g. 
when m n).\nTo avoid such a problem, we can add a regularization term and minimize the following criterion J = 1 n n i=1 (wT xi \u2212 yi)2 + \u03bb w 2 , (5) where \u03bb is a regularization parameter.\nThen the optimal solution that minimize J is given by w\u2217 = XXT + \u03bbnI \u22121 Xy, (6) where I is an m \u00d7 m identity matrix.\nIt has been reported that the regularized linear classifier can achieve very good results on text classification problems [29].\nHowever, despite its empirical success, the regularized linear classifier is on earth a global classifier, i.e. w\u2217 is estimated using the whole training set.\nAccording to [24], this may not be a smart idea, since a unique w\u2217 may not be good enough for predicting the labels of the whole input space.\nIn order to get better predictions, [6] proposed to train classifiers locally and use them to classify the testing points.\nFor example, a testing point will be classified by the local classifier trained using the training points located in the vicinity 1 In the following discussions we all assume that the documents coming from only two classes.\nThe generalizations of our method to multi-class cases will be discussed in section 2.5.\nof it.\nAlthough this method seems slow and stupid, it is reported that it can get better performances than using a unique global classifier on certain tasks [6].\n2.2.2 Constructing the Local Regularized Predictors Inspired by their success, we proposed to apply the local learning algorithms for clustering.\nThe basic idea is that, for each document vector xi (1 i n), we train a local label predictor based on its k-nearest neighborhood Ni, and then use it to predict the label of xi.\nFinally we will combine all those local predictors by minimizing the sum of their prediction errors.\nIn this subsection we will introduce how to construct those local predictors.\nDue to the simplicity and effectiveness of the regularized linear classifier that we have introduced in section 2.2.1, we choose it to be our local label predictor, such that for each document xi, the following criterion is minimized Ji = 1 ni xj \u2208Ni wT i xj \u2212 qj 2 + \u03bbi wi 2 , (7) where ni = |Ni| is the cardinality of Ni, and qj is the cluster membership of xj.\nThen using Eq.\n(6), we can get the optimal solution is w\u2217 i = XiXT i + \u03bbiniI \u22121 Xiqi, (8) where Xi = [xi1, xi2, \u00b7 \u00b7 \u00b7 , xini ], and we use xik to denote the k-th nearest neighbor of xi.\nqi = [qi1, qi2, \u00b7 \u00b7 \u00b7 , qini ]T with qik representing the cluster assignment of xik.\nThe problem here is that XiXT i is an m \u00d7 m matrix with m ni, i.e. 
we should compute the inverse of an m \u00d7 m matrix for every document vector, which is computationally prohibited.\nFortunately, we have the following theorem: Theorem 1.\nw\u2217 i in Eq.\n(8) can be rewritten as w\u2217 i = Xi XT i Xi + \u03bbiniIi \u22121 qi, (9) where Ii is an ni \u00d7 ni identity matrix.\nProof.\nSince w\u2217 i = XiXT i + \u03bbiniI \u22121 Xiqi, then XiXT i + \u03bbiniI w\u2217 i = Xiqi =\u21d2 XiXT i w\u2217 i + \u03bbiniw\u2217 i = Xiqi =\u21d2 w\u2217 i = (\u03bbini)\u22121 Xi qi \u2212 XT i w\u2217 i .\nLet \u03b2 = (\u03bbini)\u22121 qi \u2212 XT i w\u2217 i , then w\u2217 i = Xi\u03b2 =\u21d2 \u03bbini\u03b2 = qi \u2212 XT i w\u2217 i = qi \u2212 XT i Xi\u03b2 =\u21d2 qi = XT i Xi + \u03bbiniIi \u03b2 =\u21d2 \u03b2 = XT i Xi + \u03bbiniIi \u22121 qi.\nTherefore w\u2217 i = Xi\u03b2 = Xi XT i Xi + \u03bbiniIi \u22121 qi 2 Using theorem 1, we only need to compute the inverse of an ni \u00d7 ni matrix for every document to train a local label predictor.\nMoreover, for a new testing point u that falls into Ni, we can classify it by the sign of qu = w\u2217T i u = uT wi = uT Xi XT i Xi + \u03bbiniIi \u22121 qi.\nThis is an attractive expression since we can determine the cluster assignment of u by using the inner-products between the points in {u \u222a Ni}, which suggests that such a local regularizer can easily be kernelized [21] as long as we define a proper kernel function.\n2.2.3 Combining the Local Regularized Predictors After all the local predictors having been constructed, we will combine them together by minimizing Jl = n i=1 w\u2217T i xi \u2212 qi 2 , (10) which stands for the sum of the prediction errors for all the local predictors.\nCombining Eq.\n(10) with Eq.\n(6), we can get Jl = n i=1 w\u2217T i xi \u2212 qi 2 = n i=1 xT i Xi XT i Xi + \u03bbiniIi \u22121 qi \u2212 qi 2 = Pq \u2212 q 2 , (11) where q = [q1, q2, \u00b7 \u00b7 \u00b7 , qn]T , and the P is an n \u00d7 n matrix constructing in the following way.\nLet \u03b1i = xT i Xi XT i Xi + \u03bbiniIi \u22121 , then Pij = \u03b1i j, if xj \u2208 Ni 0, otherwise , (12) where Pij is the (i, j)-th entry of P, and \u03b1i j represents the j-th entry of \u03b1i .\nTill now we can write the criterion of clustering by combining locally regularized linear label predictors Jl in an explicit mathematical form, and we can minimize it directly using some standard optimization techniques.\nHowever, the results may not be good enough since we only exploit the local informations of the dataset.\nIn the next subsection, we will introduce a global regularization criterion and combine it with Jl, which aims to find a good clustering result in a local-global way.\n2.3 Global Regularization In data clustering, we usually require that the cluster assignments of the data points should be sufficiently smooth with respect to the underlying data manifold, which implies (1) the nearby points tend to have the same cluster assignments; (2) the points on the same structure (e.g. submanifold or cluster) tend to have the same cluster assignments [31].\nWithout the loss of generality, we assume that the data points reside (roughly) on a low-dimensional manifold M2 , and q is the cluster assignment function defined on M, i.e. 
2 We believe that the text data are also sampled from some low dimensional manifold, since it is impossible for them to for \u2200x \u2208 M, q(x) returns the cluster membership of x.\nThe smoothness of q over M can be calculated by the following Dirichlet integral [2] D[q] = 1 2 M q(x) 2 dM, (13) where the gradient q is a vector in the tangent space T Mx, and the integral is taken with respect to the standard measure on M.\nIf we restrict the scale of q by q, q M = 1 (where \u00b7, \u00b7 M is the inner product induced on M), then it turns out that finding the smoothest function minimizing D[q] reduces to finding the eigenfunctions of the Laplace Beltrami operator L, which is defined as Lq \u2212div q, (14) where div is the divergence of a vector field.\nGenerally, the graph can be viewed as the discretized form of manifold.\nWe can model the dataset as an weighted undirected graph as in spectral clustering [22], where the graph nodes are just the data points, and the weights on the edges represent the similarities between pairwise points.\nThen it can be shown that minimizing Eq.\n(13) corresponds to minimizing Jg = qT Lq = n i=1 (qi \u2212 qj)2 wij, (15) where q = [q1, q2, \u00b7 \u00b7 \u00b7 , qn]T with qi = q(xi), L is the graph Laplacian with its (i, j)-th entry Lij = di \u2212 wii, if i = j \u2212wij, if xi and xj are adjacent 0, otherwise, (16) where di = j wij is the degree of xi, wij is the similarity between xi and xj.\nIf xi and xj are adjacent3 , wij is usually computed in the following way wij = e \u2212 xi\u2212xj 2 2\u03c32 , (17) where \u03c3 is a dataset dependent parameter.\nIt is proved that under certain conditions, such a form of wij to determine the weights on graph edges leads to the convergence of graph Laplacian to the Laplace Beltrami operator [3][18].\nIn summary, using Eq.\n(15) with exponential weights can effectively measure the smoothness of the data assignments with respect to the intrinsic data manifold.\nThus we adopt it as a global regularizer to punish the smoothness of the predicted data assignments.\n2.4 Clustering with Local and Global Regularization Combining the contents we have introduced in section 2.2 and section 2.3 we can derive the clustering criterion is minq J = Jl + \u03bbJg = Pq \u2212 q 2 + \u03bbqT Lq s.t. qi \u2208 {\u22121, +1}, (18) where P is defined as in Eq.\n(12), and \u03bb is a regularization parameter to trade off Jl and Jg.\nHowever, the discrete fill in the whole high-dimensional sample space.\nAnd it has been shown that the manifold based methods can achieve good results on text classification tasks [31].\n3 In this paper, we define xi and xj to be adjacent if xi \u2208 N(xj) or xj \u2208 N(xi).\nconstraint of pi makes the problem an NP hard integer programming problem.\nA natural way for making the problem solvable is to remove the constraint and relax qi to be continuous, then the objective that we aims to minimize becomes J = Pq \u2212 q 2 + \u03bbqT Lq = qT (P \u2212 I)T (P \u2212 I)q + \u03bbqT Lq = qT (P \u2212 I)T (P \u2212 I) + \u03bbL q, (19) and we further add a constraint qT q = 1 to restrict the scale of q.\nThen our objective becomes minq J = qT (P \u2212 I)T (P \u2212 I) + \u03bbL q s.t. qT q = 1 (20) Using the Lagrangian method, we can derive that the optimal solution q corresponds to the smallest eigenvector of the matrix M = (P \u2212 I)T (P \u2212 I) + \u03bbL, and the cluster assignment of xi can be determined by the sign of qi, i.e. 
2.5 Multi-Class CLGR

Above we have introduced the basic framework of Clustering with Local and Global Regularization (CLGR) for the two-class clustering problem; in this subsection we extend it to multi-class clustering. First we assume that all the documents belong to $C$ classes indexed by $\mathcal{L} = \{1, 2, \cdots, C\}$. Let $q^c$ be the classification function for class $c$ ($1 \leqslant c \leqslant C$), such that $q^c(\mathbf{x}_i)$ returns the confidence that $\mathbf{x}_i$ belongs to class $c$. Our goal is to obtain the values $q^c(\mathbf{x}_i)$ ($1 \leqslant c \leqslant C$, $1 \leqslant i \leqslant n$); the cluster assignment of $\mathbf{x}_i$ can then be determined from $\{q^c(\mathbf{x}_i)\}_{c=1}^{C}$ by a proper discretization method that we introduce later.

Therefore, in this multi-class case, for each document $\mathbf{x}_i$ ($1 \leqslant i \leqslant n$) we construct $C$ locally regularized linear label predictors whose normal vectors are
$$\mathbf{w}^{c*}_i = X_i \left( X_i^T X_i + \lambda_i n_i I_i \right)^{-1} \mathbf{q}^c_i \quad (1 \leqslant c \leqslant C), \quad (21)$$
where $X_i = [\mathbf{x}_{i1}, \mathbf{x}_{i2}, \cdots, \mathbf{x}_{in_i}]$ with $\mathbf{x}_{ik}$ being the $k$-th neighbor of $\mathbf{x}_i$, and $\mathbf{q}^c_i = [q^c_{i1}, q^c_{i2}, \cdots, q^c_{in_i}]^T$ with $q^c_{ik} = q^c(\mathbf{x}_{ik})$. Then $(\mathbf{w}^{c*}_i)^T \mathbf{x}_i$ returns the predicted confidence that $\mathbf{x}_i$ belongs to class $c$. Hence the local prediction error for class $c$ can be defined as
$$\mathcal{J}^c_l = \sum_{i=1}^{n} \left( (\mathbf{w}^{c*}_i)^T \mathbf{x}_i - q^c_i \right)^2, \quad (22)$$
and the total local prediction error becomes
$$\mathcal{J}_l = \sum_{c=1}^{C} \mathcal{J}^c_l = \sum_{c=1}^{C} \sum_{i=1}^{n} \left( (\mathbf{w}^{c*}_i)^T \mathbf{x}_i - q^c_i \right)^2. \quad (23)$$
As in Eq. (11), we can define an $n \times n$ matrix $P$ (see Eq. (12)) and rewrite $\mathcal{J}_l$ as
$$\mathcal{J}_l = \sum_{c=1}^{C} \mathcal{J}^c_l = \sum_{c=1}^{C} \left\| P\mathbf{q}^c - \mathbf{q}^c \right\|^2. \quad (24)$$
Similarly, we can define the global smoothness regularizer in the multi-class case as
$$\mathcal{J}_g = \sum_{c=1}^{C} \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} \left( q^c_i - q^c_j \right)^2 = \sum_{c=1}^{C} (\mathbf{q}^c)^T L \mathbf{q}^c. \quad (25)$$
The criterion to be minimized for CLGR in the multi-class case then becomes
$$\mathcal{J} = \mathcal{J}_l + \lambda \mathcal{J}_g = \sum_{c=1}^{C} \left( \left\| P\mathbf{q}^c - \mathbf{q}^c \right\|^2 + \lambda (\mathbf{q}^c)^T L \mathbf{q}^c \right) = \sum_{c=1}^{C} (\mathbf{q}^c)^T \left( (P - I)^T (P - I) + \lambda L \right) \mathbf{q}^c = \mathrm{trace}\!\left( Q^T \left( (P - I)^T (P - I) + \lambda L \right) Q \right), \quad (26)$$
where $Q = [\mathbf{q}^1, \mathbf{q}^2, \cdots, \mathbf{q}^C]$ is an $n \times C$ matrix and $\mathrm{trace}(\cdot)$ returns the trace of a matrix. As in Eq. (20), we also add the constraint $Q^T Q = I$ to restrict the scale of $Q$. Our optimization problem then becomes
$$\min_{Q} \mathcal{J} = \mathrm{trace}\!\left( Q^T \left( (P - I)^T (P - I) + \lambda L \right) Q \right) \quad \text{s.t.}\; Q^T Q = I. \quad (27)$$
From the Ky Fan theorem [28], the optimal solution of the above problem is
$$Q^* = [\mathbf{q}^*_1, \mathbf{q}^*_2, \cdots, \mathbf{q}^*_C] R, \quad (28)$$
where $\mathbf{q}^*_k$ ($1 \leqslant k \leqslant C$) is the eigenvector corresponding to the $k$-th smallest eigenvalue of the matrix $(P - I)^T (P - I) + \lambda L$, and $R$ is an arbitrary $C \times C$ orthogonal matrix. Since the entries of $Q^*$ are continuous, we need to further discretize $Q^*$ to obtain the cluster assignments of the data points. There are mainly two approaches to achieve this:
1. As in [20], we can treat the $i$-th row of $Q^*$ as the embedding of $\mathbf{x}_i$ in a $C$-dimensional space and apply a traditional clustering method such as k-means to group these embeddings into $C$ clusters.
2. Since the optimal $Q^*$ is not unique (because of the arbitrary matrix $R$), we can pursue an optimal $R$ that rotates $Q^*$ into an indication matrix, i.e., an $n \times C$ matrix with entries in $\{0, 1\}$ and exactly one 1 in each row; $\mathbf{x}_i$ is then assigned to the cluster $j$ whose entry in the $i$-th row equals 1. The detailed algorithm is given in [26].

The detailed procedure of CLGR is summarized in Table 1, and a code sketch following these steps is given after the table.

Table 1: Clustering with Local and Global Regularization (CLGR)
Input:
1. Dataset $X = \{\mathbf{x}_i\}_{i=1}^{n}$;
2. Number of clusters $C$;
3. Size of the neighborhood $K$;
4. Local regularization parameters $\{\lambda_i\}_{i=1}^{n}$;
5. Global regularization parameter $\lambda$.
Output: The cluster membership of each data point.
Procedure:
1. Construct the $K$-nearest neighborhood of each data point;
2. Construct the matrix $P$ using Eq. (12);
3. Construct the Laplacian matrix $L$ using Eq. (16);
4. Construct the matrix $M = (P - I)^T (P - I) + \lambda L$;
5. Perform an eigenvalue decomposition of $M$ and construct the matrix $Q^*$ according to Eq. (28);
6. Output the cluster assignment of each data point by properly discretizing $Q^*$.
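The code sketch below follows the six steps of Table 1 (our own illustration, reusing the hypothetical `build_P` and `build_laplacian` helpers from the earlier sketches); k-means on the rows of $Q^*$, the first discretization option above, stands in for the rotation-based discretization of [26]:

```python
# Hypothetical end-to-end sketch of the CLGR procedure in Table 1.
from scipy.sparse import identity
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans


def clgr(X, n_clusters, k=20, lam_local=0.1, lam_global=0.1):
    """Return a cluster label in {0, ..., n_clusters - 1} for each row of X."""
    n = X.shape[0]
    P = build_P(X, k=k, lam=lam_local)          # step 2, Eq. (12)
    L = build_laplacian(X, k=k)                 # step 3, Eq. (16)
    I = identity(n, format='csr')
    M = (P - I).T @ (P - I) + lam_global * L    # step 4
    _, Q = eigsh(M, k=n_clusters, which='SM')   # step 5: the C smallest eigenvectors, Eq. (28)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Q)   # step 6
```

For large corpora, the shift-invert mode of `eigsh` is usually faster than `which='SM'` at extracting the smallest eigenvalues of a sparse symmetric matrix.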
3. EXPERIMENTS

In this section, experiments are conducted to empirically compare the clustering results of CLGR with those of 8 other representative document clustering algorithms on 5 datasets. First we introduce the basic information about these datasets.

3.1 Datasets

We use a variety of datasets, most of which are frequently used in information retrieval research. Table 2 summarizes their characteristics.

Table 2: Descriptions of the document datasets
Dataset       Number of documents   Number of classes
CSTR          476                   4
WebKB4        4199                  4
Reuters       2900                  10
WebACE        2340                  20
Newsgroup4    3970                  4

CSTR. This is a dataset of abstracts of technical reports published in the Department of Computer Science at a university. It contains 476 abstracts, divided into four research areas: Natural Language Processing (NLP), Robotics/Vision, Systems, and Theory.

WebKB. The WebKB dataset contains webpages gathered from university computer science departments. There are about 8280 documents, divided into 7 categories: student, faculty, staff, course, project, department, and other. The raw text is about 27MB. Among these 7 categories, student, faculty, course, and project are the four most populous entity-representing categories; the associated subset is typically called WebKB4.

Reuters. The Reuters-21578 Text Categorization Test collection contains documents collected from the Reuters newswire in 1987. It is a standard text categorization benchmark and contains 135 categories. In our experiments, we use a subset of the collection that includes the 10 most frequent categories among the 135 topics; we call it Reuters-top10.

WebACE. The WebACE dataset comes from the WebACE project and has been used for document clustering [17][5]. It contains 2340 documents consisting of news articles obtained from the Reuters news service via the Web in October 1997, divided into 20 classes.

News4. The News4 dataset used in our experiments is selected from the well-known 20-newsgroups dataset (http://people.csail.mit.edu/jrennie/20Newsgroups/). The topic rec, containing autos, motorcycles, baseball, and hockey, was selected from the 20news-18828 version. The News4 dataset contains 3970 document vectors.

To pre-process the datasets, we remove stop words using a standard stop list, skip all HTML tags, and ignore all header fields except the subject and organization of the posted articles. In all our experiments, we first select the top 1000 words by mutual information with the class labels.
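As a rough illustration of this preprocessing pipeline, the sketch below uses scikit-learn; the exact stop list, stemmer, and tokenizer used by the authors are not specified, so this is an assumption-laden stand-in rather than a reproduction of their setup:

```python
# Hypothetical preprocessing sketch: TF-IDF vectors, top-1000 words by mutual
# information with the class labels, and unit-length normalization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import normalize


def build_document_vectors(raw_texts, labels, n_words=1000):
    # TF-IDF with English stop-word removal (stemming omitted for brevity).
    tfidf = TfidfVectorizer(stop_words='english')
    X = tfidf.fit_transform(raw_texts)
    # Keep the n_words terms with the highest mutual information with the labels.
    selector = SelectKBest(mutual_info_classif, k=n_words)
    X = selector.fit_transform(X, labels)
    # Normalize each document vector to unit length.
    return normalize(X, norm='l2')
```

For the earlier dense-matrix sketches, the resulting sparse matrix can be densified with `.toarray()`, which is feasible at these corpus sizes once only 1000 terms are retained.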
3.2 Evaluation Metrics

In the experiments, we set the number of clusters equal to the true number of classes $C$ for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.

Clustering Accuracy (Acc). The first performance measure is the Clustering Accuracy, which discovers the one-to-one relationship between clusters and classes and measures the extent to which each cluster contains data points from the corresponding class. It sums up the matching degree over all cluster-class pairs. Clustering accuracy is computed as
$$Acc = \frac{1}{N} \max \sum_{(C_k, L_m)} T(C_k, L_m), \quad (29)$$
where $C_k$ denotes the $k$-th cluster in the final result, $L_m$ is the true $m$-th class, $T(C_k, L_m)$ is the number of entities that belong to class $m$ and are assigned to cluster $k$, and the maximum is taken over all one-to-one pairings of clusters and classes, so the pairs have no overlaps. A greater clustering accuracy means better clustering performance.

Normalized Mutual Information (NMI). The other evaluation metric we adopt is the Normalized Mutual Information (NMI) [23], which is widely used for determining the quality of clusters. For two random variables X and Y, the NMI is defined as
$$NMI(X, Y) = \frac{I(X, Y)}{\sqrt{H(X) H(Y)}}, \quad (30)$$
where $I(X, Y)$ is the mutual information between X and Y, while $H(X)$ and $H(Y)$ are the entropies of X and Y, respectively. Note that $NMI(X, X) = 1$, which is the maximal possible value of NMI. Given a clustering result, the NMI in Eq. (30) is estimated as
$$NMI = \frac{\sum_{k=1}^{C} \sum_{m=1}^{C} n_{k,m} \log \frac{n \cdot n_{k,m}}{n_k \hat{n}_m}}{\sqrt{\left( \sum_{k=1}^{C} n_k \log \frac{n_k}{n} \right) \left( \sum_{m=1}^{C} \hat{n}_m \log \frac{\hat{n}_m}{n} \right)}}, \quad (31)$$
where $n_k$ denotes the number of data points contained in cluster $C_k$ ($1 \leqslant k \leqslant C$), $\hat{n}_m$ is the number of data points belonging to the $m$-th class ($1 \leqslant m \leqslant C$), and $n_{k,m}$ denotes the number of data points in the intersection of cluster $C_k$ and the $m$-th class. The value computed by Eq. (31) is used as a performance measure for the given clustering result; the larger this value, the better the clustering performance.

3.3 Comparisons

We have conducted comprehensive performance evaluations by testing our method and comparing it with 8 other representative data clustering methods on the same data corpora. The algorithms that we evaluated are listed below.
1. Traditional k-means (KM).
2. Spherical k-means (SKM). The implementation is based on [9].
3. Gaussian Mixture Model (GMM). The implementation is based on [16].
4. Spectral Clustering with Normalized Cuts (Ncut). The implementation is based on [26], and the variance of the Gaussian similarity is determined by local scaling [30]. Note that the criterion that Ncut minimizes is exactly the global regularizer in our CLGR algorithm, except that Ncut uses the normalized Laplacian.
5. Clustering using Pure Local Regularization (CPLR). In this method we minimize only $\mathcal{J}_l$ (defined in Eq. (24)), and the clustering results are obtained by an eigenvalue decomposition of the matrix $(I - P)^T (I - P)$ followed by a proper discretization method.
6. Adaptive Subspace Iteration (ASI). The implementation is based on [14].
7. Nonnegative Matrix Factorization (NMF). The implementation is based on [27].
8. Tri-Factorization Nonnegative Matrix Factorization (TNMF) [12]. The implementation is based on [15].

For computational efficiency, in the implementations of CPLR and of our CLGR algorithm we set all the local regularization parameters $\{\lambda_i\}_{i=1}^{n}$ to a common value, chosen by grid search from $\{0.1, 1, 10\}$. The size of the k-nearest neighborhoods is set by grid search from $\{20, 40, 80\}$. For the CLGR method, the global regularization parameter is also set by grid search from $\{0.1, 1, 10\}$. When constructing the global regularizer, we adopt the local scaling method [30] to construct the Laplacian matrix. The final discretization method adopted in these two methods is the same as in [26], since our experiments show that it achieves better results than the k-means based method used in [20].
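Before turning to the results, here is a minimal sketch of the two metrics (our own code; the Hungarian algorithm via `scipy.optimize.linear_sum_assignment` finds the optimal one-to-one matching of Eq. (29), and scikit-learn's NMI with geometric averaging matches the square-root normalization of Eq. (30)):

```python
# Hypothetical sketch of the evaluation metrics in Eqs. (29)-(31).
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix


def clustering_accuracy(true_labels, cluster_labels):
    """Eq. (29): accuracy under the best one-to-one matching of clusters to classes."""
    T = contingency_matrix(true_labels, cluster_labels)   # T[m, k] = |class m in cluster k|
    rows, cols = linear_sum_assignment(-T)                # maximize the matched counts
    return T[rows, cols].sum() / len(true_labels)


def nmi(true_labels, cluster_labels):
    """Eq. (31), with the sqrt(H(X)H(Y)) normalization of Eq. (30)."""
    return normalized_mutual_info_score(true_labels, cluster_labels,
                                        average_method='geometric')
```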
3.4 Experimental Results

The clustering accuracy results are shown in Table 3, and the normalized mutual information results are summarized in Table 4.

Table 3: Clustering accuracies of the various methods
Method   CSTR     WebKB4   Reuters  WebACE   News4
KM       0.4256   0.3888   0.4448   0.4001   0.3527
SKM      0.4690   0.4318   0.5025   0.4458   0.3912
GMM      0.4487   0.4271   0.4897   0.4521   0.3844
NMF      0.5713   0.4418   0.4947   0.4761   0.4213
Ncut     0.5435   0.4521   0.4896   0.4513   0.4189
ASI      0.5621   0.4752   0.5235   0.4823   0.4335
TNMF     0.6040   0.4832   0.5541   0.5102   0.4613
CPLR     0.5974   0.5020   0.4832   0.5213   0.4890
CLGR     0.6235   0.5228   0.5341   0.5376   0.5102

Table 4: Normalized mutual information results of the various methods
Method   CSTR     WebKB4   Reuters  WebACE   News4
KM       0.3675   0.3023   0.4012   0.3864   0.3318
SKM      0.4027   0.4155   0.4587   0.4003   0.4085
GMM      0.4034   0.4093   0.4356   0.4209   0.3994
NMF      0.5235   0.4517   0.4402   0.4359   0.4130
Ncut     0.4833   0.4497   0.4392   0.4289   0.4231
ASI      0.5008   0.4833   0.4769   0.4817   0.4503
TNMF     0.5724   0.5011   0.5132   0.5328   0.4749
CPLR     0.5695   0.5231   0.4402   0.5543   0.4690
CLGR     0.6012   0.5434   0.4935   0.5390   0.4908

From the two tables we mainly observe the following:
1. Our CLGR method outperforms all the other document clustering methods on most of the datasets.
2. For document clustering, the spherical k-means method usually outperforms the traditional k-means method, and the GMM method achieves results competitive with spherical k-means.
3. The results achieved by the k-means and GMM type algorithms are usually worse than those achieved by spectral clustering. Since spectral clustering can be viewed as a weighted version of kernel k-means, it can obtain good results even when the data clusters are arbitrarily shaped. This corroborates that the document vectors are not regularly distributed (spherical or elliptical).
4. The experimental comparisons empirically verify the equivalence between NMF and spectral clustering, which has been proved theoretically in [10]; it can be observed from the tables that NMF and spectral clustering usually lead to similar clustering results.
5. The co-clustering based methods (TNMF and ASI) usually achieve better results than methods based purely on the document vectors, since they perform an implicit feature selection at each iteration and provide an adaptive metric for measuring the neighborhood, which tends to yield better clustering results.
6. The results achieved by CPLR are usually better than those achieved by spectral clustering, which supports Vapnik's argument [24] that local learning algorithms can sometimes obtain better results than global learning algorithms.

Besides the above comparisons, we also tested the parameter sensitivity of our method. There are two main sets of parameters in our CLGR algorithm: the local and global regularization parameters ($\{\lambda_i\}_{i=1}^{n}$ and $\lambda$; as stated in section 3.3, we set all $\lambda_i$ to a common value $\lambda^*$ in our experiments) and the size of the neighborhoods. We therefore conducted two sets of experiments (a small grid-search sketch is given after this subsection):
1. Fixing the size of the neighborhoods and varying $\lambda^*$ and $\lambda$. In this set of experiments, we find that our CLGR algorithm achieves good results when the two regularization parameters are neither too large nor too small; typically, good results are obtained when $\lambda^*$ and $\lambda$ are around 0.1. Figure 1 shows such a test on the WebACE dataset.

[Figure 1: Parameter sensitivity results on the WebACE dataset with the neighborhood size fixed to 20; the x-axis and y-axis show the log2 values of $\lambda^*$ and $\lambda$, and the vertical axis shows clustering accuracy.]

2. Fixing the local and global regularization parameters and varying the neighborhood size. In this set of experiments, we find that a neighborhood that is either too large or too small deteriorates the final clustering results. This is easy to understand: when the neighborhood size is very small, the data points used for training the local classifiers may not be sufficient; when the neighborhood size is very large, the trained classifiers tend to become global and cannot capture the typical local characteristics. Figure 2 shows such a test on the WebACE dataset.

[Figure 2: Parameter sensitivity results on the WebACE dataset with the regularization parameters fixed to 0.1 and the neighborhood size varying from 10 to 100; the vertical axis shows clustering accuracy.]

Therefore, we can see that our CLGR algorithm (1) can achieve satisfactory results and (2) is not very sensitive to the choice of parameters, which makes it practical in real-world applications.
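To make the grid searches of section 3.3 and the sensitivity study above concrete, here is a minimal sketch (our own code, reusing the hypothetical `clgr` and `clustering_accuracy` helpers from the earlier sketches; the grids are the ones quoted in section 3.3, and scoring against the ground-truth labels is purely for illustration, since the paper does not state its selection criterion):

```python
# Hypothetical grid search over the CLGR hyper-parameters.
import itertools


def grid_search_clgr(X, true_labels, n_clusters):
    best_params, best_acc = None, -1.0
    for k, lam_star, lam in itertools.product([20, 40, 80],    # neighborhood size
                                              [0.1, 1, 10],    # local lambda*
                                              [0.1, 1, 10]):   # global lambda
        labels = clgr(X, n_clusters, k=k, lam_local=lam_star, lam_global=lam)
        acc = clustering_accuracy(true_labels, labels)
        if acc > best_acc:
            best_params, best_acc = (k, lam_star, lam), acc
    return best_params, best_acc
```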
4. CONCLUSIONS AND FUTURE WORK

In this paper, we derived a new clustering algorithm called clustering with local and global regularization. Our method preserves the merits of local learning algorithms and of spectral clustering. Our experiments show that the proposed algorithm outperforms most state-of-the-art algorithms on many benchmark datasets. In the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.

5. REFERENCES
[1] L. Baker and A. McCallum. Distributional Clustering of Words for Text Classification. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.
[2] M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6):1373-1396, June 2003.
[3] M. Belkin and P. Niyogi. Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. In Proceedings of the 18th Conference on Learning Theory (COLT), 2005.
[4] M. Belkin, P. Niyogi and V. Sindhwani. Manifold Regularization: A Geometric Framework for Learning from Examples. Journal of Machine Learning Research, 7:1-48, 2006.
[5] D. Boley. Principal Direction Divisive Partitioning. Data Mining and Knowledge Discovery, 2:325-344, 1998.
[6] L. Bottou and V. Vapnik. Local Learning Algorithms. Neural Computation, 4:888-900, 1992.
[7] P. K. Chan, D. F. Schlag and J. Y. Zien. Spectral K-way Ratio-Cut Partitioning and Clustering. IEEE Transactions on Computer-Aided Design, 13:1088-1096, Sep. 1994.
[8] D. R. Cutting, D. R. Karger, J. O. Pederson and J. W. Tukey. Scatter/Gather: A Cluster-Based Approach to Browsing Large Document Collections. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1992.
[9] I. S. Dhillon and D. S. Modha. Concept Decompositions for Large Sparse Text Data Using Clustering. Machine Learning, 42(1):143-175, January 2001.
[10] C. Ding, X. He, and H. Simon. On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering. In Proceedings of the SIAM Data Mining Conference, 2005.
[11] C. Ding, X. He, H. Zha, M. Gu, and H. D.
Simon. A Min-Max Cut Algorithm for Graph Partitioning and Data Clustering. In Proceedings of the 1st International Conference on Data Mining (ICDM), pages 107-114, 2001.
[12] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal Nonnegative Matrix Tri-Factorizations for Clustering. In Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
[13] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001.
[14] T. Li, S. Ma, and M. Ogihara. Document Clustering via Adaptive Subspace Iteration. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2004.
[15] T. Li and C. Ding. The Relationships Among Various Nonnegative Matrix Factorization Methods for Clustering. In Proceedings of the 6th International Conference on Data Mining (ICDM), 2006.
[16] X. Liu and Y. Gong. Document Clustering with Cluster Refinement and Model Selection Capabilities. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002.
[17] E. Han, D. Boley, M. Gini, R. Gross, K. Hastings, G. Karypis, V. Kumar, B. Mobasher, and J. Moore. WebACE: A Web Agent for Document Categorization and Exploration. In Proceedings of the 2nd International Conference on Autonomous Agents (Agents98). ACM Press, 1998.
[18] M. Hein, J. Y. Audibert, and U. von Luxburg. From Graphs to Manifolds - Weak and Strong Pointwise Consistency of Graph Laplacians. In Proceedings of the 18th Conference on Learning Theory (COLT), pages 470-485, 2005.
[19] J. He, M. Lan, C.-L. Tan, S.-Y. Sung, and H.-B. Low. Initialization of Cluster Refinement Algorithms: A Review and Comparative Study. In Proceedings of the International Joint Conference on Neural Networks, 2004.
[20] A. Y. Ng, M. I. Jordan, and Y. Weiss. On Spectral Clustering: Analysis and an Algorithm. In Advances in Neural Information Processing Systems 14, 2002.
[21] B. Schölkopf and A. Smola. Learning with Kernels. The MIT Press, Cambridge, Massachusetts, 2002.
[22] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[23] A. Strehl and J. Ghosh. Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions. Journal of Machine Learning Research, 3:583-617, 2002.
[24] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, 1995.
[25] M. Wu and B. Schölkopf. A Local Learning Approach for Clustering. In Advances in Neural Information Processing Systems 18, 2006.
[26] S. X. Yu and J. Shi. Multiclass Spectral Clustering. In Proceedings of the International Conference on Computer Vision, 2003.
[27] W. Xu, X. Liu and Y. Gong. Document Clustering Based On Non-Negative Matrix Factorization. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.
[28] H. Zha, X. He, C. Ding, M. Gu and H. Simon. Spectral Relaxation for K-means Clustering. In Advances in Neural Information Processing Systems 14, 2001.
[29] T. Zhang and F. J. Oles. Text Categorization Based on Regularized Linear Classification Methods. Journal of Information Retrieval, 4:5-31, 2001.
[30] L. Zelnik-Manor and P. Perona. Self-Tuning Spectral Clustering. In Advances in Neural Information Processing Systems 17, 2005.
[31] D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B.
Schölkopf. Learning with Local and Global Consistency. In Advances in Neural Information Processing Systems 17, 2005.
they usually provide specified search for documents matching the user 's query , however , it is hard for them to meet the needs from the rest of the spectrum in which a rather broad or vague information is needed .\nIn such cases , efficient browsing through a good cluster hierarchy will be definitely helpful .\nGenerally , document clustering methods can be mainly categorized into two classes : hierarchical methods and partitioning methods .\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches .\nFor example , hierarchical agglomerative clustering ( HAC ) [ 13 ] is a typical bottom-up hierarchical clustering method .\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster .\nOn the other hand , partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions .\nFor instance , K-means [ 13 ] is a typical partitioning method which aims to minimize the sum of the squared distance between the data points and their corresponding cluster centers .\nIn this paper , we will focus on the partitioning methods .\nAs we know that there are two main problems existing in partitioning methods ( like Kmeans and Gaussian Mixture Model ( GMM ) [ 16 ] ) : ( 1 ) the predefined criterion is usually non-convex which causes many local optimal solutions ; ( 2 ) the iterative procedure ( e.g. the Expectation Maximization ( EM ) algorithm ) for optimizing the criterions usually makes the final solutions heavily depend on the initializations .\nIn the last decades , many methods have been proposed to overcome the above problems of the partitioning methods [ 19 ] [ 28 ] .\nRecently , another type of partitioning methods based on clustering on data graphs have aroused considerable interests in the machine learning and data mining community .\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph , in which the graph nodes represent the data points , and the weights on the edges correspond to the similarities between pairwise points .\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph .\nFor example Spectral Clustering is one kind of the most representative graph-based clustering approaches , it generally aims to optimize some cut value ( e.g. Normalized Cut [ 22 ] , Ratio Cut [ 7 ] , Min-Max Cut [ 11 ] ) defined on an undirected graph .\nAfter some relaxations , these criterions can usually be optimized via eigen-decompositions , which is guaranteed to be global optimal .\nIn this way , spectral clustering efficiently avoids the problems of the traditional partitioning methods as we introduced in last paragraph .\nIn this paper , we propose a novel document clustering algorithm that inherits the superiority of spectral clustering , i.e. 
the final cluster results can also be obtained by exploit the eigen-structure of a symmetric matrix .\nHowever , unlike spectral clustering , which just enforces a smoothness constraint on the data labels over the whole data manifold [ 2 ] , our method first construct a regularized linear label predictor for each data point from its neighborhood as in [ 25 ] , and then combine the results of all these local label predictors with a global label smoothness regularizer .\nSo we call our method Clustering with Local and Global Regularization ( CLGR ) .\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [ 31 ] , and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods .\nThe rest of this paper is organized as follows : in section 2 we will introduce our CLGR algorithm in detail .\nThe experimental results on several datasets are presented in section 3 , followed by the conclusions and discussions in section 4 .\n2 .\nTHE PROPOSED ALGORITHM\n2.1 Document Representation\n2.2 Local Regularization\n2.2.1 Motivation\n2.2.2 Constructing the Local Regularized Predictors\n2.2.3 Combining the Local Regularized Predictors\n2.3 Global Regularization\n2.4 Clustering with Local and Global Regularization\n2.5 Multi-Class CLGR\n3 .\nEXPERIMENTS\n3.1 Datasets\n3.2 Evaluation Metrics\nCk , Lm\n3.3 Comparisons\n3.4 Experimental Results\n4 .\nCONCLUSIONS AND FUTURE WORKS\nIn this paper , we derived a new clustering algorithm called clustering with local and global regularization .\nOur method preserves the merit of local learning algorithms and spectral clustering .\nOur experiments show that the proposed algorithm outperforms most of the state of the art algorithms on many benchmark datasets .\nIn the future , we will focus on the parameter selection and acceleration issues of the CLGR algorithm .", "lvl-4": "Regularized Clustering for Documents *\nABSTRACT\nIn recent years , document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization , automatic topic extraction , and fast information retrieval or filtering .\nIn this paper , we propose a novel method for clustering documents using regularization .\nUnlike traditional globally regularized clustering methods , our method first construct a local regularized linear label predictor for each document vector , and then combine all those local regularizers with a global smoothness regularizer .\nSo we call our algorithm Clustering with Local and Global Regularization ( CLGR ) .\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix , which can be efficiently solved by iterative methods .\nFinally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods .\n1 .\nINTRODUCTION\nDocument clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization , automatic topic extraction , and fast information retrieval or filtering .\nA good document clustering approach can assist the computers to automatically organize the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation , which is very valuable for complementing the deficiencies of traditional 
information retrieval technologies .\nIn such cases , efficient browsing through a good cluster hierarchy will be definitely helpful .\nGenerally , document clustering methods can be mainly categorized into two classes : hierarchical methods and partitioning methods .\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches .\nFor example , hierarchical agglomerative clustering ( HAC ) [ 13 ] is a typical bottom-up hierarchical clustering method .\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster .\nOn the other hand , partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions .\nFor instance , K-means [ 13 ] is a typical partitioning method which aims to minimize the sum of the squared distance between the data points and their corresponding cluster centers .\nIn this paper , we will focus on the partitioning methods .\nIn the last decades , many methods have been proposed to overcome the above problems of the partitioning methods [ 19 ] [ 28 ] .\nRecently , another type of partitioning methods based on clustering on data graphs have aroused considerable interests in the machine learning and data mining community .\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph , in which the graph nodes represent the data points , and the weights on the edges correspond to the similarities between pairwise points .\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph .\nAfter some relaxations , these criterions can usually be optimized via eigen-decompositions , which is guaranteed to be global optimal .\nIn this way , spectral clustering efficiently avoids the problems of the traditional partitioning methods as we introduced in last paragraph .\nIn this paper , we propose a novel document clustering algorithm that inherits the superiority of spectral clustering , i.e. 
the final cluster results can also be obtained by exploit the eigen-structure of a symmetric matrix .\nSo we call our method Clustering with Local and Global Regularization ( CLGR ) .\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [ 31 ] , and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods .\nThe rest of this paper is organized as follows : in section 2 we will introduce our CLGR algorithm in detail .\nThe experimental results on several datasets are presented in section 3 , followed by the conclusions and discussions in section 4 .\n4 .\nCONCLUSIONS AND FUTURE WORKS\nIn this paper , we derived a new clustering algorithm called clustering with local and global regularization .\nOur method preserves the merit of local learning algorithms and spectral clustering .\nOur experiments show that the proposed algorithm outperforms most of the state of the art algorithms on many benchmark datasets .\nIn the future , we will focus on the parameter selection and acceleration issues of the CLGR algorithm .", "lvl-2": "Regularized Clustering for Documents *\nABSTRACT\nIn recent years , document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization , automatic topic extraction , and fast information retrieval or filtering .\nIn this paper , we propose a novel method for clustering documents using regularization .\nUnlike traditional globally regularized clustering methods , our method first construct a local regularized linear label predictor for each document vector , and then combine all those local regularizers with a global smoothness regularizer .\nSo we call our algorithm Clustering with Local and Global Regularization ( CLGR ) .\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix , which can be efficiently solved by iterative methods .\nFinally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods .\n1 .\nINTRODUCTION\nDocument clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization , automatic topic extraction , and fast information retrieval or filtering .\nA good document clustering approach can assist the computers to automatically organize the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation , which is very valuable for complementing the deficiencies of traditional information retrieval technologies .\nAs pointed out by [ 8 ] , the information retrieval needs can be expressed by a spectrum ranged from narrow keyword-matching based search to broad information browsing such as what are the major international events in recent months .\nTraditional document retrieval engines tend to fit well with the search end of the spectrum , i.e. 
they usually provide specified search for documents matching the user 's query , however , it is hard for them to meet the needs from the rest of the spectrum in which a rather broad or vague information is needed .\nIn such cases , efficient browsing through a good cluster hierarchy will be definitely helpful .\nGenerally , document clustering methods can be mainly categorized into two classes : hierarchical methods and partitioning methods .\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches .\nFor example , hierarchical agglomerative clustering ( HAC ) [ 13 ] is a typical bottom-up hierarchical clustering method .\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster .\nOn the other hand , partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions .\nFor instance , K-means [ 13 ] is a typical partitioning method which aims to minimize the sum of the squared distance between the data points and their corresponding cluster centers .\nIn this paper , we will focus on the partitioning methods .\nAs we know that there are two main problems existing in partitioning methods ( like Kmeans and Gaussian Mixture Model ( GMM ) [ 16 ] ) : ( 1 ) the predefined criterion is usually non-convex which causes many local optimal solutions ; ( 2 ) the iterative procedure ( e.g. the Expectation Maximization ( EM ) algorithm ) for optimizing the criterions usually makes the final solutions heavily depend on the initializations .\nIn the last decades , many methods have been proposed to overcome the above problems of the partitioning methods [ 19 ] [ 28 ] .\nRecently , another type of partitioning methods based on clustering on data graphs have aroused considerable interests in the machine learning and data mining community .\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph , in which the graph nodes represent the data points , and the weights on the edges correspond to the similarities between pairwise points .\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph .\nFor example Spectral Clustering is one kind of the most representative graph-based clustering approaches , it generally aims to optimize some cut value ( e.g. Normalized Cut [ 22 ] , Ratio Cut [ 7 ] , Min-Max Cut [ 11 ] ) defined on an undirected graph .\nAfter some relaxations , these criterions can usually be optimized via eigen-decompositions , which is guaranteed to be global optimal .\nIn this way , spectral clustering efficiently avoids the problems of the traditional partitioning methods as we introduced in last paragraph .\nIn this paper , we propose a novel document clustering algorithm that inherits the superiority of spectral clustering , i.e. 
the final cluster results can also be obtained by exploit the eigen-structure of a symmetric matrix .\nHowever , unlike spectral clustering , which just enforces a smoothness constraint on the data labels over the whole data manifold [ 2 ] , our method first construct a regularized linear label predictor for each data point from its neighborhood as in [ 25 ] , and then combine the results of all these local label predictors with a global label smoothness regularizer .\nSo we call our method Clustering with Local and Global Regularization ( CLGR ) .\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [ 31 ] , and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods .\nThe rest of this paper is organized as follows : in section 2 we will introduce our CLGR algorithm in detail .\nThe experimental results on several datasets are presented in section 3 , followed by the conclusions and discussions in section 4 .\n2 .\nTHE PROPOSED ALGORITHM\nIn this section , we will introduce our Clustering with Local and Global Regularization ( CLGR ) algorithm in detail .\nFirst let 's see the how the documents are represented throughout this paper .\n2.1 Document Representation\nIn our work , all the documents are represented by the weighted term-frequency vectors .\nLet W = { w1 , w2 , \u00b7 \u00b7 \u00b7 , wm } be the complete vocabulary set of the document corpus ( which is preprocessed by the stopwords removal and words stemming operations ) .\nThe term-frequency vector xi of document di is defined as\nwhere tik is the term frequency of wk W , n is the size of the document corpus , idfk is the number of documents that contain word wk .\nIn this way , xi is also called the TFIDF representation of document di .\nFurthermore , we also normalize each xi ( 1 S i S n ) to have a unit length , so that each document is represented by a normalized TF-IDF vector .\n2.2 Local Regularization\nAs its name suggests , CLGR is composed of two parts : local regularization and global regularization .\nIn this subsection we will introduce the local regularization part in detail .\n2.2.1 Motivation\nAs we know that clustering is one type of learning techniques , it aims to organize the dataset in a reasonable way .\nGenerally speaking , learning can be posed as a problem of function estimation , from which we can get a good classification function that will assign labels to the training dataset and even the unseen testing dataset with some cost minimized [ 24 ] .\nFor example , in the two-class classification scenario1 ( in which we exactly know the label of each document ) , a linear classifier with least square fit aims to learn a column vector w such that the squared cost\nis minimized , where yi { +1 , \u2212 1 } is the label of xi .\nBy taking J / w = 0 , we get the solution\nwhere X = [ x1 , x2 , \u00b7 \u00b7 \u00b7 , xn ] is an m \u00d7 n document matrix , y = [ y1 , y2 , \u00b7 \u00b7 \u00b7 , yn ] T is the label vector .\nThen for a test document t , we can determine its label by l = sign ( w * T u ) , ( 4 ) where sign ( \u00b7 ) is the sign function .\nA natural problem in Eq .\n( 3 ) is that the matrix XXT may be singular and thus not invertable ( e.g. 
when m n ) .\nTo avoid such a problem , we can add a regularization term and minimize the following criterion\nwhere is a regularization parameter .\nThen the optimal solution that minimize J' is given by\nwhere I is an m \u00d7 m identity matrix .\nIt has been reported that the regularized linear classifier can achieve very good results on text classification problems [ 29 ] .\nHowever , despite its empirical success , the regularized linear classifier is on earth a global classifier , i.e. w * is estimated using the whole training set .\nAccording to [ 24 ] , this may not be a smart idea , since a unique w * may not be good enough for predicting the labels of the whole input space .\nIn order to get better predictions , [ 6 ] proposed to train classifiers locally and use them to classify the testing points .\nFor example , a testing point will be classified by the local classifier trained using the training points located in the vicinity 1In the following discussions we all assume that the documents coming from only two classes .\nThe generalizations of our method to multi-class cases will be discussed in section 2.5 .\nof it .\nAlthough this method seems slow and stupid , it is reported that it can get better performances than using a unique global classifier on certain tasks [ 6 ] .\n2.2.2 Constructing the Local Regularized Predictors\nInspired by their success , we proposed to apply the local learning algorithms for clustering .\nThe basic idea is that , for each document vector xi ( 1 S i S n ) , we train a local label predictor based on its k-nearest neighborhood Ni , and then use it to predict the label of xi .\nFinally we will combine all those local predictors by minimizing the sum of their prediction errors .\nIn this subsection we will introduce how to construct those local predictors .\nDue to the simplicity and effectiveness of the regularized linear classifier that we have introduced in section 2.2.1 , we choose it to be our local label predictor , such that for each document xi , the following criterion is minimized\nwhere ni = | Ni | is the cardinality of Ni , and qj is the cluster membership of xj .\nThen using Eq .\n( 6 ) , we can get the optimal solution is\nwhere Xi = [ xi1 , xi2 , \u00b7 \u00b7 \u00b7 , xini ] , and we use xik to denote the k-th nearest neighbor of xi .\nqi = [ qi1 , qi2 , \u00b7 \u00b7 \u00b7 , qini ] T with qik representing the cluster assignment of xik .\nThe problem here is that XiXTi is an m \u00d7 m matrix with m ni , i.e. 
we should compute the inverse of an m \u00d7 m matrix for every document vector , which is computationally prohibited .\nFortunately , we have the following theorem : Theorem 1 .\nw i in Eq .\n( 8 ) can be rewritten as\nwhere Ii is an ni \u00d7 ni identity matrix .\nUsing theorem 1 , we only need to compute the inverse of an ni \u00d7 ni matrix for every document to train a local label predictor .\nMoreover , for a new testing point u that falls into Ni , we can classify it by the sign of\nThis is an attractive expression since we can determine the cluster assignment of u by using the inner-products between the points in { u Ni } , which suggests that such a local regularizer can easily be kernelized [ 21 ] as long as we define a proper kernel function .\n2.2.3 Combining the Local Regularized Predictors\nAfter all the local predictors having been constructed , we will combine them together by minimizing\nwhich stands for the sum of the prediction errors for all the local predictors .\nCombining Eq .\n( 10 ) with Eq .\n( 6 ) , we can get\nwhere q = [ q1 , q2 , \u00b7 \u00b7 \u00b7 , qn ] T , and the P is an n \u00d7 n matrix constructing in the following way .\nLet ( ) \u2212 1 = xTi Xi XTi Xi + iniIi , then aij , if xj Ni Pij = 0 , otherwise , ( 12 ) where Pij is the ( i , j ) - th entry of P , and aij represents the j-th entry of ai .\nTill now we can write the criterion of clustering by combining locally regularized linear label predictors Jl in an explicit mathematical form , and we can minimize it directly using some standard optimization techniques .\nHowever , the results may not be good enough since we only exploit the local informations of the dataset .\nIn the next subsection , we will introduce a global regularization criterion and combine it with Jl , which aims to find a good clustering result in a local-global way .\n2.3 Global Regularization\nIn data clustering , we usually require that the cluster assignments of the data points should be sufficiently smooth with respect to the underlying data manifold , which implies ( 1 ) the nearby points tend to have the same cluster assignments ; ( 2 ) the points on the same structure ( e.g. submanifold or cluster ) tend to have the same cluster assignments [ 31 ] .\nWithout the loss of generality , we assume that the data points reside ( roughly ) on a low-dimensional manifold M2 , and q is the cluster assignment function defined on M , i.e. 
2We believe that the text data are also sampled from some low dimensional manifold , since it is impossible for them to\nfor x M , q ( x ) returns the cluster membership of x .\nThe smoothness of q over M can be calculated by the following Dirichlet integral [ 2 ] f\nwhere the gradient q is a vector in the tangent space TMx , and the integral is taken with respect to the standard measure on M .\nIf we restrict the scale of q by q , q M = 1 ( where \u00b7 , \u00b7 M is the inner product induced on M ) , then it turns out that finding the smoothest function minimizing D [ q ] reduces to finding the eigenfunctions of the Laplace Beltrami operator L , which is defined as Lq A \u2212 div q , ( 14 ) where div is the divergence of a vector field .\nGenerally , the graph can be viewed as the discretized form of manifold .\nWe can model the dataset as an weighted undirected graph as in spectral clustering [ 22 ] , where the graph nodes are just the data points , and the weights on the edges represent the similarities between pairwise points .\nThen it can be shown that minimizing Eq .\n( 13 ) corresponds to minimizing\nwhere q = [ q1 , q2 , \u00b7 \u00b7 \u00b7 , qn ] T with qi = q ( xi ) , L is the graph Laplacian with its ( i , j ) - th entry di \u2212 wii , if i = j \u2212 wij , if xi and xj are adjacent ( 16 ) 0 , otherwise , where di = Ej wij is the degree of xi , wij is the similarity between xi and xj .\nIf xi and xj are adjacent3 , wij is usually computed in the following way\nwhere v is a dataset dependent parameter .\nIt is proved that under certain conditions , such a form of wij to determine the weights on graph edges leads to the convergence of graph Laplacian to the Laplace Beltrami operator [ 3 ] [ 18 ] .\nIn summary , using Eq .\n( 15 ) with exponential weights can effectively measure the smoothness of the data assignments with respect to the intrinsic data manifold .\nThus we adopt it as a global regularizer to punish the smoothness of the predicted data assignments .\n2.4 Clustering with Local and Global Regularization\nCombining the contents we have introduced in section 2.2 and section 2.3 we can derive the clustering criterion is minq J = Jl + AJg = Pq \u2212 q 2 + AqT Lq\nwhere P is defined as in Eq .\n( 12 ) , and A is a regularization parameter to trade off Jl and Jg .\nHowever , the discrete fill in the whole high-dimensional sample space .\nAnd it has been shown that the manifold based methods can achieve good results on text classification tasks [ 31 ] .\n3In this paper , we define xi and xj to be adjacent if xi N ( xj ) or xj N ( xi ) .\nconstraint of pi makes the problem an NP hard integer programming problem .\nA natural way for making the problem solvable is to remove the constraint and relax qi to be continuous , then the objective that we aims to minimize becomes\nand we further add a constraint qT q = 1 to restrict the scale of q .\nThen our objective becomes\nUsing the Lagrangian method , we can derive that the optimal solution q corresponds to the smallest eigenvector of the matrix M = ( P \u2212 I ) T ( P \u2212 I ) + AL , and the cluster assignment of xi can be determined by the sign of qi , i.e. 
xi will be classified as class one if qi > 0 , otherwise it will be classified as class 2 .\n2.5 Multi-Class CLGR\nIn the above we have introduced the basic framework of Clustering with Local and Global Regularization ( CLGR ) for the two-class clustering problem , and we will extending it to multi-class clustering in this subsection .\nFirst we assume that all the documents belong to C classes indexed by L = { 1 , 2 , \u00b7 \u00b7 \u00b7 , C } .\nqc is the classification function for class c ( 1 S c S C ) , such that qc ( xi ) returns the confidence that xi belongs to class c .\nOur goal is to obtain the value of qc ( xi ) ( 1 S c S C , 1 S i S n ) , and the cluster assignment of xi can be determined by { qc ( xi ) } Cc = 1 using some proper discretization methods that we will introduce later .\nTherefore , in this multi-class case , for each document xi ( 1 S i S n ) , we will construct C locally linear regularized label predictors whose normal vectors are ( ) \u2212 1 wc * i = Xi XTi Xi + AiniIi qci ( 1 S c S C ) , ( 21 ) where Xi = [ xi1 , xi2 , \u00b7 \u00b7 \u00b7 , xini ] with xik being the k-th neighbor of xi , and qci = [ qci1 , qci2 , \u00b7 \u00b7 \u00b7 , qcini ] T with qcik = qc ( xik ) .\nThen ( wc * i ) T xi returns the predicted confidence of xi belonging to class c. Hence the local prediction error for class c can be defined as\nAnd the total local prediction error becomes\nAs in Eq .\n( 11 ) , we can define an n \u00d7 n matrix P ( see Eq .\n( 12 ) ) and rewrite Jl as Pqc \u2212 qc 2 .\n( 24 ) Similarly we can define the global smoothness regularizer\nin multi-class case as ( qc ) T Lqc .\n( 25 ) Then the criterion to be minimized for CLGR in multi-class case becomes\nwhere Q = [ q1 , q2 , \u00b7 \u00b7 \u00b7 , qc ] is an n \u00d7 c matrix , and trace ( \u00b7 ) returns the trace of a matrix .\nThe same as in Eq .\n( 20 ) , we also add the constraint that QT Q = I to restrict the scale of Q .\nThen our optimization problem becomes\nFrom the Ky Fan theorem [ 28 ] , we know the optimal solution of the above problem is\nwhere q * k ( 1 S k S C ) is the eigenvector corresponds to the k-th smallest eigenvalue of matrix ( P \u2212 I ) T ( P \u2212 I ) + AL , and R is an arbitrary C \u00d7 C matrix .\nSince the values of the entries in Q * is continuous , we need to further discretize Q * to get the cluster assignments of all the data points .\nThere are mainly two approaches to achieve this goal : 1 .\nAs in [ 20 ] , we can treat the i-th row of Q as the embedding of xi in a C-dimensional space , and apply some traditional clustering methods like kmeans to clustering these embeddings into C clusters .\n2 .\nSince the optimal Q * is not unique ( because of the existence of an arbitrary matrix R ) , we can pursue an optimal R that will rotate Q * to an indication matrix4 .\nThe detailed algorithm can be referred to [ 26 ] .\nThe detailed algorithm procedure for CLGR is summarized in table 1 .\n3 .\nEXPERIMENTS\nIn this section , experiments are conducted to empirically compare the clustering results of CLGR with other 8 representitive document clustering algorithms on 5 datasets .\nFirst we will introduce the basic informations of those datasets .\n3.1 Datasets\nWe use a variety of datasets , most of which are frequently used in the information retrieval research .\nTable 2 summarizes the characteristics of the datasets .\n4Here an indication matrix T is a n \u00d7 c matrix with its ( i , j ) th entry Tij { 0 , 1 } such that for each row of Q * there is only one 1 .\nThen the 
xi can be assigned to the j-th cluster such that j = argjQ * ij = 1 .\nTable 1 : Clustering with Local and Global Regularization ( CLGR )\nInput :\n1 .\nDataset X = { xi } ni = 1 ; 2 .\nNumber of clusters C ; 3 .\nSize of the neighborhood K ; 4 .\nLocal regularization parameters { ai } n i = 1 ; 5 .\nGlobal regularization parameter A ; Output : The cluster membership of each data point .\nProcedure : 1 .\nConstruct the K nearest neighborhoods for each data point ; 2 .\nConstruct the matrix P using Eq .\n( 12 ) ; 3 .\nConstruct the Laplacian matrix L using Eq .\n( 16 ) ; 4 .\nConstruct the matrix M = ( P \u2212 I ) T ( P \u2212 I ) + AL ; 5 .\nDo eigenvalue decomposition on M , and construct the matrix Q * according to Eq .\n( 28 ) ; 6 .\nOutput the cluster assignments of each data point by properly discretize Q * .\nTable 2 : Descriptions of the document datasets\nCSTR .\nThis is the dataset of the abstracts of technical reports published in the Department of Computer Science at a university .\nThe dataset contained 476 abstracts , which were divided into four research areas : Natural Language Processing ( NLP ) , Robotics/Vision , Systems , and Theory .\nWebKB .\nThe WebKB dataset contains webpages gathered from university computer science departments .\nThere are about 8280 documents and they are divided into 7 categories : student , faculty , staff , course , project , department and other .\nThe raw text is about 27MB .\nAmong these 7 categories , student , faculty , course and project are four most populous entity-representing categories .\nThe associated subset is typically called WebKB4 .\nReuters .\nThe Reuters-21578 Text Categorization Test collection contains documents collected from the Reuters newswire in 1987 .\nIt is a standard text categorization benchmark and contains 135 categories .\nIn our experiments , we use a subset of the data collection which includes the 10 most frequent categories among the 135 topics and we call it Reuters-top 10 .\nWebACE .\nThe WebACE dataset was from WebACE project and has been used for document clustering [ 17 ] [ 5 ] .\nThe WebACE dataset contains 2340 documents consisting news articles from Reuters new service via the Web in October 1997 .\nThese documents are divided into 20 classes .\nNews4 .\nThe News4 dataset used in our experiments are selected from the famous 20-newsgroups dataset5 .\nThe topic rec containing autos , motorcycles , baseball and hockey was selected from the version 20news-18828 .\nThe News4 dataset contains 3970 document vectors .\nTo pre-process the datasets , we remove the stop words using a standard stop list , all HTML tags are skipped and all header fields except subject and organization of the posted articles are ignored .\nIn all our experiments , we first select the top 1000 words by mutual information with class labels .\n3.2 Evaluation Metrics\nIn the experiments , we set the number of clusters equal to the true number of classes C for all the clustering algorithms .\nTo evaluate their performance , we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures .\nClustering Accuracy ( Acc ) .\nThe first performance measure is the Clustering Accuracy , which discovers the one-toone relationship between clusters and classes and measures the extent to which each cluster contained data points from the corresponding class .\nIt sums up the whole matching degree between all pair class-clusters .\nClustering accuracy can be computed as :\nCk , 
Lm\nwhere Ck denotes the k-th cluster in the final results , and Lm is the true m-th class .\nT ( Ck , Lm ) is the number of entities which belong to class m are assigned to cluster k. Accuracy computes the maximum sum of T ( Ck , Lm ) for all pairs of clusters and classes , and these pairs have no overlaps .\nThe greater clustering accuracy means the better clustering performance .\nNormalized Mutual Information ( NMI ) .\nAnother evaluation metric we adopt here is the Normalized Mutual Information NMI [ 23 ] , which is widely used for determining the quality of clusters .\nFor two random variable X and Y , the NMI is defined as :\nwhere I ( X , Y ) is the mutual information between X and Y , while H ( X ) and H ( Y ) are the entropies of X and Y respectively .\nOne can see that NMI ( X , X ) = 1 , which is the maximal possible value of NMI .\nGiven a clustering result , the NMI in Eq .\n( 30 ) is estimated as where nk denotes the number of data contained in the cluster Ck ( 1 S k S C ) , \u02c6nm is the number of data belonging to the m-th class ( 1 S m S C ) , and nk , m denotes the number of data that are in the intersection between the cluster Ck and the m-th class .\nThe value calculated in Eq .\n( 31 ) is used as a performance measure for the given clustering result .\nThe larger this value , the better the clustering performance .\n3.3 Comparisons\nWe have conducted comprehensive performance evaluations by testing our method and comparing it with 8 other representative data clustering methods using the same data corpora .\nThe algorithms that we evaluated are listed below .\n1 .\nTraditional k-means ( KM ) .\n2 .\nSpherical k-means ( SKM ) .\nThe implementation is based on [ 9 ] .\n3 .\nGaussian Mixture Model ( GMM ) .\nThe implementation is based on [ 16 ] .\n4 .\nSpectral Clustering with Normalized Cuts ( Ncut ) .\nThe implementation is based on [ 26 ] , and the variance of the Gaussian similarity is determined by Local Scaling [ 30 ] .\nNote that the criterion that Ncut aims to minimize is just the global regularizer in our CLGR algorithm except that Ncut used the normalized Laplacian .\n5 .\nClustering using Pure Local Regularization ( CPLR ) .\nIn this method we just minimize Jl ( defined in Eq .\n( 24 ) ) , and the clustering results can be obtained by doing eigenvalue decomposition on matrix ( I \u2212 P ) T ( I \u2212 P ) with some proper discretization methods .\n6 .\nAdaptive Subspace Iteration ( ASI ) .\nThe implementation is based on [ 14 ] .\n7 .\nNonnegative Matrix Factorization ( NMF ) .\nThe implementation is based on [ 27 ] .\n8 .\nTri-Factorization Nonnegative Matrix Factorization ( TNMF ) [ 12 ] .\nThe implementation is based on [ 15 ] .\nFor computational efficiency , in the implementation of CPLR and our CLGR algorithm , we have set all the local regularization parameters { i } n i = 1 to be identical , which is set by grid search from { 0.1 , 1 , 10 } .\nThe size of the k-nearest neighborhoods is set by grid search from { 20 , 40 , 80 } .\nFor the CLGR method , its global regularization parameter is set by grid search from { 0.1 , 1 , 10 } .\nWhen constructing the global regularizer , we have adopted the local scaling method [ 30 ] to construct the Laplacian matrix .\nThe final discretization method adopted in these two methods is the same as in [ 26 ] , since our experiments show that using such method can achieve better results than using kmeans based methods as in [ 20 ] .\n3.4 Experimental Results\nThe clustering accuracies comparison results are shown in 
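As a concrete reference for the two measures just defined, the following sketch (Python, numpy and scipy) computes the clustering accuracy by an optimal one-to-one matching of clusters to classes and estimates the NMI from the same counts; the geometric-mean normalization of the mutual information is an assumption on our part, following the common convention of [23], since the exact formula is not reproduced above.

import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    """Acc: optimal one-to-one matching between clusters and classes."""
    true_labels, cluster_labels = np.asarray(true_labels), np.asarray(cluster_labels)
    classes, clusters = np.unique(true_labels), np.unique(cluster_labels)
    # counts[k, m] = T(C_k, L_m), the number of points of class m assigned to cluster k.
    counts = np.array([[np.sum((cluster_labels == ck) & (true_labels == lm))
                        for lm in classes] for ck in clusters])
    # linear_sum_assignment minimizes, so negate to maximize the matched counts.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(true_labels)

def normalized_mutual_information(true_labels, cluster_labels):
    """NMI estimated from counts (assumes at least two clusters and two classes)."""
    true_labels, cluster_labels = np.asarray(true_labels), np.asarray(cluster_labels)
    classes, clusters = np.unique(true_labels), np.unique(cluster_labels)
    mi = 0.0
    for ck in clusters:
        p_k = np.mean(cluster_labels == ck)
        for lm in classes:
            p_km = np.mean((cluster_labels == ck) & (true_labels == lm))
            if p_km > 0:
                mi += p_km * np.log(p_km / (p_k * np.mean(true_labels == lm)))
    h_clust = -sum(p * np.log(p) for p in [np.mean(cluster_labels == c) for c in clusters])
    h_true = -sum(p * np.log(p) for p in [np.mean(true_labels == c) for c in classes])
    return mi / np.sqrt(h_true * h_clust)

Both functions return values in [0, 1], and larger values indicate better agreement with the reference classes.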
table 3 , and the normalized mutual information comparison results are summarized in table 4 .\nFrom the two tables we mainly observe that :\n1 .\nOur CLGR method outperforms all other document clustering methods in most of the datasets ; 2 .\nFor document clustering , the Spherical k-means method usually outperforms the traditional k-means clustering method , and the GMM method can achieve competitive results compared to the Spherical k-means method ; 3 .\nThe results achieved from the k-means and GMM type algorithms are usually worse than the results achieved from Spectral Clustering .\nSince Spectral Clustering can be viewed as a weighted version of kernel k-means , it can obtain good results the data clusters are arbitrarily shaped .\nThis corroborates that the documents vectors are not regularly distributed ( spherical or elliptical ) .\n4 .\nThe experimental comparisons empirically verify the equivalence between NMF and Spectral Clustering , which\nTable 3 : Clustering accuracies of the various methods\nTable 4 : Normalized mutual information results of the various methods\nhas been proved theoretically in [ 10 ] .\nIt can be observed from the tables that NMF and Spectral Clustering usually lead to similar clustering results .\n5 .\nThe co-clustering based methods ( TNMF and ASI ) can usually achieve better results than traditional purely document vector based methods .\nSince these methods perform an implicit feature selection at each iteration , provide an adaptive metric for measuring the neighborhood , and thus tend to yield better clustering results .\n6 .\nThe results achieved from CPLR are usually better than the results achieved from Spectral Clustering , which supports Vapnik 's theory [ 24 ] that sometimes local learning algorithms can obtain better results than global learning algorithms .\nBesides the above comparison experiments , we also test the parameter sensibility of our method .\nThere are mainly two sets of parameters in our CLGR algorithm , the local and global regularization parameters ( { Ai } n i = 1 and A , as we have said in section 3.3 , we have set all Ai 's to be identical to A * in our experiments ) , and the size of the neighborhoods .\nTherefore we have also done two sets of experiments : 1 .\nFixing the size of the neighborhoods , and testing the clustering performance with varying A * and A .\nIn this set of experiments , we find that our CLGR algorithm can achieve good results when the two regularization parameters are neither too large nor too small .\nTypically our method can achieve good results when A * and A are around 0.1 .\nFigure 1 shows us such a testing example on the WebACE dataset .\n2 .\nFixing the local and global regularization parameters , and testing the clustering performance with different\nFigure 1 : Parameter sensibility testing results on the WebACE dataset with the neighborhood size fixed to 20 , and the x-axis and y-axis represents the loge value of A * and A.\nsizes of neighborhoods .\nIn this set of experiments , we find that the neighborhood with a too large or too small size will all deteriorate the final clustering results .\nThis can be easily understood since when the neighborhood size is very small , then the data points used for training the local classifiers may not be sufficient ; when the neighborhood size is very large , the trained classifiers will tend to be global and can not capture the typical local characteristics .\nFigure 2 shows us a testing example on the WebACE dataset .\nTherefore , we can see that 
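The grid searches described in Sections 3.3 and 3.4 above amount to the following loop (Python, scikit-learn for the NMI score); fit_predict stands for the caller's CLGR implementation and is a hypothetical hook, not something defined in the paper.

import itertools
from sklearn.metrics import normalized_mutual_info_score

def grid_search_clgr(fit_predict, X, reference_labels,
                     neighborhood_sizes=(20, 40, 80),
                     local_regs=(0.1, 1, 10),
                     global_regs=(0.1, 1, 10)):
    """Select the CLGR hyper-parameters by exhaustive grid search.

    fit_predict(X, k, local_reg, global_reg) is assumed to run CLGR with a
    k-nearest neighborhood, local regularization parameter local_reg and global
    regularization parameter global_reg, returning one cluster label per row of X.
    """
    best_setting, best_score = None, -1.0
    for k, a, lam in itertools.product(neighborhood_sizes, local_regs, global_regs):
        score = normalized_mutual_info_score(reference_labels, fit_predict(X, k, a, lam))
        if score > best_score:
            best_setting, best_score = (k, a, lam), score
    return best_setting, best_score

Fixing one of the three values and sweeping the others corresponds to the sensitivity experiments described above.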
our CLGR algorithm ( 1 ) can achieve satisfactory results and ( 2 ) is not very sensitive to the choice of parameters , which makes it practical in real world applications .\n4 .\nCONCLUSIONS AND FUTURE WORKS\nIn this paper , we derived a new clustering algorithm called clustering with local and global regularization .\nOur method preserves the merit of local learning algorithms and spectral clustering .\nOur experiments show that the proposed algorithm outperforms most of the state of the art algorithms on many benchmark datasets .\nIn the future , we will focus on the parameter selection and acceleration issues of the CLGR algorithm ."} {"id": "J-8", "title": "", "abstract": "", "keyphrases": ["cost share connect game", "player number", "singl sourc and sink", "singl sourc multipl sink", "multi sourc and sink", "edg cost", "fair connect game", "gener connect game", "graph topolog", "strong equilibrium", "coalit", "specif cost", "extens parallel graph", "optim solut", "game theori", "nash equilibrium", "anarchi price", "strong price of anarchi", "network design", "cost share game"], "prmu": [], "lvl-1": "Strong Equilibrium in Cost Sharing Connection Games\u2217 Amir Epstein School of Computer Science Tel-Aviv University Tel-Aviv, 69978, Israel amirep@tau.ac.il Michal Feldman School of Computer Science The Hebrew University of Jerusalem Jerusalem, 91904, Israel mfeldman@cs.huji.ac.il Yishay Mansour School of Computer Science Tel-Aviv University Tel-Aviv, 69978, Israel mansour@tau.ac.il ABSTRACT In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or in an arbitrary way (general connection games).\nWe study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges.\nOur main existence results are the following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games).\n(2) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games).\n(3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games.\nAs for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398(log n) from the optimal solution, where n is the number of players.\n(This should be contrasted with the \u2126(n) price of anarchy for the same setting.)\nFor single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an optimal solution.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; F.2.0 [Analysis of Algorithms and Problem Complexity]: General; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Electronic Commerce]: Payment schemes General Terms Theory, Economics, Algorithms 1.\nINTRODUCTION Computational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems.\nThe view that the demand side is many times not under the control of a central authority that optimizes the global performance, but rather under the control of individuals with different incentives, has led already to many 
important insights.\nConsider classical routing and transportation problems such as multicast or multi-commodity problems, which are many times viewed as follows.\nWe are given a graph with edge costs and connectivity demands between nodes, and our goal is to find a minimal cost solution.\nThe classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives.\nThe game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility, and the resulting outcome could be far from the optimal solution.\nWhen considering individual incentives one needs to discuss the appropriate solution concept.\nMuch of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept.\nIndeed Nash equilibrium has many benefits, and most importantly it always exists (in mixed strategies).\nHowever, the solution concept of Nash equilibrium is resilient only to unilateral deviations, while in reality, players may be able to coordinate their actions.\nA strong equilibrium [4] is a state from which no coalition (of any size) can deviate and improve the utility of every member of the coalition (while possibly lowering the utility 84 of players outside the coalition).\nThis resilience to deviations by coalitions of the players is highly attractive, and one can hope that once a strong equilibrium is reached it is highly likely to sustain.\nFrom a computational game theory point of view, an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior.\nThe strong price of anarchy (SPoA), introduced in [1], is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution.\nObviously, SPoA is meaningful only in those cases where a strong equilibrium exists.\nA major downside of strong equilibrium is that most games do not admit any strong equilibrium.\nEven simple classical games like the prisoner``s dilemma do not posses any strong equilibrium (which is also an example of a congestion game that does not posses a strong equilibrium1 ).\nThis unfortunate fact has reduced the concentration in strong equilibrium, despite its highly attractive properties.\nYet, [1] have identified two broad families of games, namely job scheduling and network formation, where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy (which is the ratio between the worst Nash equilibrium and the optimal solution [15, 18, 5, 6]).\nIn this work we concentrate on cost sharing connection games, introduced by [3, 2].\nIn such a game, there is an underlying directed graph with edge costs, and individual users have connectivity demands (between a source and a sink).\nWe consider two models.\nThe fair cost connection model [2] allows each player to select a path from the source to the sink2 .\nIn this game the cost of an edge is shared equally between all the players that selected the edge, and the cost of the player is the sum of its costs on the edges it selected.\nThe general connection game [3] allows each player to offer prices for edges.\nIn this game an edge is bought if the sum of the offers at least covers its cost, and the cost of the player is the sum of its offers on the bought edges (in both games we assume that the player has to guarantee the connectivity between its 
source and sink).\nIn this work we focus on two important issues.\nThe first one is identifying under what conditions the existence of a strong equilibrium is guaranteed, and the second one is the quality of the strong equilibria.\nFor the existence part, we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs.\nOne can view this separation between the graph topology and the edge costs, as a separation between the underlying infrastructure and the costs the players observe to purchase edges.\nWhile one expects the infrastructure to be stable over long periods of time, the costs the players observe can be easily modified over short time periods.\nSuch a topological characterization of the underlying infrastructure provides a network designer topological conditions that will ensure stability in his network.\nOur results are as follows.\nFor the single commodity case (all the players have the same source and sink), there is a strong equilibrium in any graph (both for fair and general connection games).\nMoreover, the strong equilibrium is also 1 while any congestion game is known to admit at least one Nash equilibrium in pure strategies [16].\n2 The fair cost sharing scheme is also attractive from a mechanism design point of view, as it is a strategyproof costsharing mechanism [14].\nthe optimal solution (namely, the players share a shortest path from the common source to the common sink).\nFor the case of a single source and multiple sinks (for example, in a multicast tree), we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph, and we show an example of a nonseries parallel graph that does not have a strong equilibrium.\nFor the case of multi-commodity (multi sources and sinks), we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium, and we show an example of a series parallel graph that does not have a strong equilibrium.\nAs far as we know, we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games.\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398(log n) from the optimal solution, where n is the number of players.\nThis should be contrasted with the \u0398(n) bound that exists for the price of anarchy [2].\nFor single source general connection games, we show that any series parallel graph possesses a strong equilibrium, and we show an example of a graph that does not have a strong equilibrium.\nIn this case we also show that any strong equilibrium is optimal.\nRelated work Topological characterizations for single-commodity network games have been recently provided for various equilibrium properties, including equilibrium existence [12, 7, 8], equilibrium uniqueness [10] and equilibrium efficiency [17, 11].\nThe existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [12].\nThe existence of strong equilibrium was studied in both utility-decreasing (e.g., routing) and utility-increasing (e.g., fair cost-sharing) congestion games.\n[7, 8] have provided a full topological characterization for a SE existence in single-commodity utility-decreasing congestion games, and showed that a SE always exists if and only if the underlying graph is extension-parallel.\n[19] have shown that in 
single-commodity utility-increasing congestion games, the topological characterization is essentially equivalent to parallel links.\nIn addition, they have shown that these results hold for correlated strong equilibria as well (in contrast to the decreasing setting, where correlated strong equilibria might not exist at all).\nWhile the fair cost sharing games we study are utility increasing network congestion games, we derive a different characterization than [19] due to the different assumptions regarding the players'' actions.3 2.\nMODEL 2.1 Game Theory definitions A game \u039b =< N, (\u03a3i), (ci) > has a finite set N = {1, ... , n} of players.\nPlayer i \u2208 N has a set \u03a3i of actions, the joint action set is \u03a3 = \u03a31 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03a3n and a joint action S \u2208 \u03a3 is also called a profile.\nThe cost function of player i is 3 In [19] they allow to restrict some players from using certain links, even though the links exist in the graph, while we do not allow this, and assume that the available strategies for players are fully represented by the underlying graph.\n85 ci : \u03a3 \u2192 R+ , which maps the joint action S \u2208 \u03a3 to a non-negative real number.\nLet S = (S1, ... , Sn) denote the profile of actions taken by the players, and let S\u2212i = (S1, ... , Si\u22121, Si+1, ... , Sn) denote the profile of actions taken by all players other than player i. Note that S = (Si, S\u2212i).\nThe social cost of a game \u039b is the sum of the costs of the players, and we denote by OPT(\u039b) the minimal social cost of a game \u039b.\ni.e., OPT(\u039b) = minS\u2208\u03a3 cost\u039b(S), where cost\u039b(S) = i\u2208N ci(S).\nA joint action S \u2208 \u03a3 is a pure Nash equilibrium if no player i \u2208 N can benefit from unilaterally deviating from his action to another action, i.e., \u2200i \u2208 N \u2200Si \u2208 \u03a3i : ci(S\u2212i, Si) \u2265 ci(S).\nWe denote by NE(\u039b) the set of pure Nash equilibria in the game \u039b.\nResilience to coalitions: A pure deviation of a set of players \u0393 \u2282 N (also called coalition) specifies an action for each player in the coalition, i.e., \u03b3 \u2208 \u00d7i\u2208\u0393\u03a3i.\nA joint action S \u2208 \u03a3 is not resilient to a pure deviation of a coalition \u0393 if there is a pure joint action \u03b3 of \u0393 such that ci(S\u2212\u0393, \u03b3) < ci(S) for every i \u2208 \u0393 (i.e., the players in the coalition can deviate in such a way that each player in the coalition reduces its cost).\nA pure Nash equilibrium S \u2208 \u03a3 is a k-strong equilibrium, if there is no coalition \u0393 of size at most k, such that S is not resilient to a pure deviation by \u0393.\nWe denote by k-SE(\u039b) the set of k-strong equilibria in the game \u039b.\nWe denote by SE(\u039b) the set of n-strong equilibria, and call S \u2208 SE(\u039b) a strong equilibrium (SE).\nNext we define the Price of Anarchy [9], Price of Stability [2], and their extension to Strong Price of Anarchy and Strong Price of Stability.\nof anarchy (k-SPoA) for the game \u039b.\nThe Price of Anarchy (PoA) is the ratio between the maximal cost of a pure Nash equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208NE(\u039b) cost\u039b(S) /OPT(\u039b).\nSimilarly, the Price of Stability (PoS) is the ratio between the minimal cost of a pure Nash equilibrium and the social optimum, i.e., minS\u2208NE(\u039b) cost\u039b(S)/OPT(\u039b).\nThe k-Strong Price of Anarchy (k-SPoA) is the ratio between the maximal 
cost of a k-strong equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208k-SE(\u039b) cost\u039b(S) /OPT(\u039b).\nThe SPoA is the n-SPoA.\nSimilarly, the Strong Price of Stability (SPoS) is the ratio between the minimal cost of a pure strong equilibrium and the social optimum, i.e., minS\u2208SE(\u039b) cost\u039b(S)/OPT(\u039b).\nNote that both k-SPoA and SPoS are defined only if some strong equilibrium exists.\n2.2 Cost Sharing Connection Games A cost sharing connection game has an underlying directed graph G = (V, E) where each edge e \u2208 E has an associated cost ce \u2265 04 .\nIn a connection game each player i \u2208 N has an associated source si and sink ti.\nIn a fair connection game the actions \u03a3i of player i include all the paths from si to ti.\nThe cost of each edge is shared equally by the set of all players whose paths contain it.\nGiven a joint action, the cost of a player is the sum of his costs on the edges it selected.\nMore formally, the cost function of each player on an edge e, in a joint action S, is fe(ne(S)) = ce ne(S) , where ne(S) is the number of players that selected a path containing edge e in S.\nThe cost of player i, when selecting path Qi \u2208 \u03a3i is ci(S) = e\u2208Qi fe(ne(S)).\n4 In some of the existence proofs, we assume that ce > 0 for simplicity.\nThe full version contains the complete proofs for the case ce \u2265 0.\nIn a general connection game the actions \u03a3i of player i is a payment vector pi, where pi(e) is how much player i is offering to contribute to the cost of edge e.5 Given a profile p, any edge e such that i pi(e) \u2265 ce is considered bought, and Ep denotes the set of bought edges.\nLet Gp = (V, Ep) denote the graph bought by the players for profile p = (p1, ... , pn).\nClearly, each player tries to minimize his total payment which is ci(p) = e\u2208Ep pi(e) if si is connected to ti in Gp, and infinity otherwise.6 We denote by c(p) = i ci(p) the total cost under the profile p. For a subgraph H of G we denote the total cost of the edges in H by c(H).\nA symmetric connection game implies that the source and sink of all the players are identical.\n(We also call a symmetric connection game a single source single sink connection game, or a single commodity connection game.)\nA single source connection game implies that the sources of all the players are identical.\nFinally, A multi commodity connection game implies that each player has its own source and sink.\n2.3 Extension Parallel and Series Parallel Directed Graphs Our directed graphs would be acyclic, and would have a source node (from which all nodes are reachable) and a sink node (which every node can reach).\nWe first define the following actions for composition of directed graphs.\n\u2022 Identification: The identification operation allows to collapse two nodes to one.\nMore formally, given graph G = (V, E) we define the identification of a node v1 \u2208 V and v2 \u2208 V forming a new node v \u2208 V as creating a new graph G = (V , E ), where V = V \u2212{v1, v2}\u222a{v} and E includes the edges of E where the edges of v1 and v2 are now connected to v. 
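Referring back to the fair cost-sharing rule of Section 2.2, the sketch below (Python with networkx; the function names and the brute-force deviation check are ours) computes the per-player costs of a profile of chosen paths and tests whether the profile is a pure Nash equilibrium by enumerating unilateral deviations over all simple s_i-t_i paths.

import networkx as nx

def fair_costs(G, chosen_paths):
    """Per-player costs c_i(S) in a fair connection game.

    G            : nx.DiGraph with a nonnegative 'cost' attribute on every edge.
    chosen_paths : dict mapping each player to its chosen path (a list of nodes).
    Each edge cost is split equally among the players whose paths use it,
    i.e. f_e(n_e(S)) = c_e / n_e(S).
    """
    load = {}                                   # n_e(S) for every used edge
    for path in chosen_paths.values():
        for e in zip(path, path[1:]):
            load[e] = load.get(e, 0) + 1
    return {player: sum(G.edges[e]["cost"] / load[e] for e in zip(path, path[1:]))
            for player, path in chosen_paths.items()}

def is_pure_nash(G, terminals, profile):
    """True if no single player can strictly lower its cost by switching paths."""
    base = fair_costs(G, profile)
    for player, (s, t) in terminals.items():
        for alternative in nx.all_simple_paths(G, s, t):
            deviated = {**profile, player: alternative}
            if fair_costs(G, deviated)[player] < base[player] - 1e-12:
                return False
    return True

In the symmetric case, where every player picks the same path Q from the common source to the common sink, fair_costs charges each player c(Q)/n; a strong-equilibrium check would extend is_pure_nash by enumerating joint deviations of every coalition of players.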
\u2022 Parallel composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1||G2 as follows.\nLet G = (V1 \u222a V2, E1 \u222a E2) be the union graph.\nTo create G = G1||G2 we identify the sources s1 and s2, forming a new source node s, and identify the sinks t1 and t2, forming a new sink t. \u2022 Series composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1 \u2192 G2 as follows.\nLet G = (V1 \u222a V2, E1 \u222a E2) be the union graph.\nTo create G = G1 \u2192 G2 we identify the vertices t1 and s2, forming a new vertex u.\nThe graph G has a source s = s1 and a sink t = t2.\n\u2022 Extension composition : A series composition when one of the graphs, G1 or G2, is composed of a single directed edge is an extension composition, and we denote it by G = G1 \u2192e G2.\nAn extension parallel graph (EPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2 or (3) a graph G = G1 \u2192e G2, where G1 and G2 are 5 We limit the players to select a path connecting si to ti and payment only on those edges.\n6 This implies that in equilibrium every player has its sink and source connected by a path in Gp.\n86 extension parallel graphs (and in the extension composition either G1 or G2 is a single edge.)\n.\nA series parallel graph (SPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2 or (3) a graph G = G1 \u2192 G2, where G1 and G2 are series parallel graphs.\nGiven a path Q and two vertices u, v on Q, we denote the subpath of Q from u to v by Qu,v.\nThe following lemma, whose proof appears in the full version, would be the main topological tool in the case of single source graph.\nLemma 2.1.\nLet G be an SPG with source s and sink t. 
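The composition operations above can be sketched as follows (Python with networkx; an nx.MultiDiGraph is used so that parallel edges between the same pair of nodes are kept, and the helper names are ours).

import itertools
import networkx as nx

_fresh = itertools.count()

def single_edge(cost=1.0):
    """An SPG (and EPG) consisting of a single directed edge (s, t)."""
    s, t = f"s{next(_fresh)}", f"t{next(_fresh)}"
    G = nx.MultiDiGraph()
    G.add_edge(s, t, cost=cost)
    return G, s, t

def parallel(spg1, spg2):
    """Parallel composition G1 || G2: identify the sources and identify the sinks."""
    (G1, s1, t1), (G2, s2, t2) = spg1, spg2
    G = nx.union(G1, G2)                      # node sets are disjoint by construction
    G = nx.contracted_nodes(G, s1, s2, self_loops=False)
    G = nx.contracted_nodes(G, t1, t2, self_loops=False)
    return G, s1, t1

def series(spg1, spg2):
    """Series composition G1 -> G2: identify the sink of G1 with the source of G2."""
    (G1, s1, t1), (G2, s2, t2) = spg1, spg2
    G = nx.contracted_nodes(nx.union(G1, G2), t1, s2, self_loops=False)
    return G, s1, t2

def extension(spg, cost=1.0, edge_first=True):
    """Extension composition: a series composition in which one side is a single edge."""
    return series(single_edge(cost), spg) if edge_first else series(spg, single_edge(cost))

# Example: an EPG consisting of two parallel edges preceded by an extension edge.
G, s, t = extension(parallel(single_edge(1.0), single_edge(2.0)), cost=0.5)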
Given a path Q, from s to t, and a vertex t , there exist a vertex y \u2208 Q, such that for any path Q from s to t , the path Q contains y and the paths Qy,t and Q are edge disjoint.\n(We call the vertex y the intersecting vertex of Q and t .)\n3.\nFAIR CONNECTION GAMES This section derives our results for fair connection games.\n3.1 Existence of Strong Equilibrium While it is known that every fair connection game possesses a Nash equilibrium in pure strategies [2], this is not necessarily the case for a strong equilibrium.\nIn this section, we study the existence of strong equilibrium in fair connection games.\nWe begin with a simple case, showing that every symmetric fair connection game possesses a strong equilibrium.\nTheorem 3.1.\nIn every symmetric fair connection game there exists a strong equilibrium.\nProof.\nLet s be the source and t be the sink of all the players.\nWe show that a profile S in which all the players choose the same shortest path Q (from the source s to the sink t ) is a strong equilibrium.\nSuppose by contradiction that S is not a SE.\nThen there is a coalition \u0393 that can deviate to a new profile S such that the cost of every player j \u2208 \u0393 decreases.\nLet Qj be a new path used by player j \u2208 \u0393.\nSince Q is a shortest path, it holds that c(Qj \\ (Q \u2229 Qj)) \u2265 c(Q \\ (Q \u2229 Qj)), for any path Qj.\nTherefore for every player j \u2208 \u0393 we have that cj(S ) \u2265 cj(S).\nHowever, this contradicts the fact that all players in \u0393 reduce their cost.\n(In fact, no player in \u0393 has reduced its cost.)\nWhile every symmetric fair connection game admits a SE, it does not hold for every fair connection game.\nIn what follows, we study the network topologies that admit a strong equilibrium for any assignment of edge costs, and give examples of topologies for which a strong equilibrium does not exist.\nThe following lemma, whose proof appears in the full version, plays a major role in our proofs of the existence of SE.\nLemma 3.2.\nLet \u039b be a fair connection game on a series parallel graph G with source s and sink t. Assume that player i has si = s and ti = t and that \u039b has some SE.\nLet S be a SE that minimizes the cost of player i (out of all SE), i.e., ci(S) = minT \u2208SE(\u039b) ci(T) and let S\u2217 be the profile that minimizes the cost of player i (out of all possible profiles), i.e., ci(S\u2217 ) = minT \u2208\u03a3 ci(T).\nThen, ci(S) = ci(S\u2217 ).\nThe next lemma considers parallel composition.\nLemma 3.3.\nLet \u039b be a fair connection game on graph G = G1||G2, where G1 and G2 are series parallel graphs.\nIf every fair connection game on the graphs G1 and G2 possesses a strong equilibrium, then the game \u039b possesses a strong equilibrium.\nProof.\nLet G1 = (V1, E1) and G2 = (V2, E2) have sources s1 and s2 and sinks t1 and t2, respectively.\nLet Ti be the set of players with an endpoint in Vi \\ {s, t}, for i \u2208 {1, 2}.\n(An endpoint is either a source or a sink of a player).\nLet T3 be the set of players j such that sj = s and tj = t. Let \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively.\nLet S and S be the SE in \u039b1 and \u039b2 that minimizes the cost of players in T3, respectively.\nAssume w.l.o.g. 
that ci(S ) \u2264 ci(S ) where player i \u2208 T3.\nIn addition, let \u039b2 be the game on the graph G2 with players T2 and let \u00afS be a SE in \u039b2.\nWe will show that the profile S = S \u222a \u00afS is a SE in \u039b.\nSuppose by contradiction that S is not a SE.\nThen, there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases.\nBy Lemma 3.2 and the assumption that ci(S ) \u2264 ci(S ), a player j \u2208 T3 cannot improve his cost.\nTherefore, \u0393 \u2286 T1 \u222a T2.\nBut this is a contradiction to S being a SE in \u039b1 or \u00afS being a SE in \u039b2.\nThe following theorem considers the case of single source fair connection games.\nTheorem 3.4.\nEvery single source fair connection game on a series-parallel graph possesses a strong equilibrium.\nProof.\nWe prove the theorem by induction on the network size |V |.\nThe claim obviously holds if |V | = 2.\nWe show the claim for a series composition, i.e., G = G1 \u2192 G2, and for a parallel composition, i.e., G = G1||G2, where G1 = (V1, E1) and G2 = (V2, E2) are SPG``s with sources s1, s2, and sinks t1, t2, respectively.\nseries composition.\nLet G = G1 \u2192 G2.\nLet T1 be the set of players j such that tj \u2208 V1, and T2 be the set of players j such that tj \u2208 V2 \\ {s2}.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T2 and T2, respectively.\nFor every player i \u2208 T2 with action Si in the game \u039b let Si \u2229E1 be his induced action in the game \u039b1, and let Si \u2229E2 be his induced action in the game \u039b2.\nLet S be a SE in \u039b1 that minimizes the cost of players in T2 (such a SE exists by the induction hypothesis and Lemma 3.2).\nLet S be any SE in \u039b2.\nWe will show that the profile S = S \u222a S is a SE in the game \u039b, i.e., for player j \u2208 T2 we use the profile Sj = Sj \u222a Sj .\nSuppose by contradiction that S is not a SE.\nThen, there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases.\nNow, there are two cases: Case 1: \u0393 \u2286 T1.\nThis is a contradiction to S being a SE.\nCase 2: There exists a player j \u2208 \u0393 \u2229 T2.\nBy Lemma 3.2, player j cannot improve his cost in \u039b1 so the improvement is due to \u039b2.\nConsider the coalition \u0393 \u2229 T2, it would still improve its cost.\nHowever, this contradicts the fact that S is a SE in \u039b2.\nparallel composition.\nFollows from Lemma 3.3.\nWhile multi-commodity fair connection games on series parallel graphs do not necessarily possess a SE (see Theorem 3.6), fair connection games on extension parallel graphs always possess a strong equilibrium.\nTheorem 3.5.\nEvery fair connection game on an extension parallel graph possesses a strong equilibrium.\n87 t2 t1 s1 s2 2 2 1 3 3 1 (b)(a) a b e f c d Figure 1: Graph topologies.\nProof.\nWe prove the theorem by induction on the network size |V |.\nLet \u039b be a fair connection game on an EPG G = (V, E).\nThe claim obviously holds if |V | = 2.\nIf the graph G is a parallel composition of two EPG graphs G1 and G2, then the claim follows from Lemma 3.3.\nIt remains to prove the claim for extension composition.\nSuppose the graph G is an extension composition of the graph G1 consisting of a single edge e = (s1, t1) and an EPG G2 = (V2, E2) with terminals s2, t2, such that s = s1 and t = t2.\n(The case that G2 is a single edge is similar.)\nLet T1 be the set of players with source s1 and sink t1 (i.e., their 
path is in G1).\nLet T2 be the set of players with source and sink in G2.\nLet T3 be the set of players with source s1 and sink in V2 \\ t1.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively.\nLet S , S be SE in \u039b1 and \u039b2 respectively.\nWe will show that the profile S = S \u222a S is a SE in the game \u039b.\nSuppose by contradiction that S is not a SE.\nThen, there is a coalition \u0393 of minimal size that can deviate such that the cost of any player j \u2208 \u0393 decreases.\nClearly, T1 \u2229\u0393 = \u03c6, since players in T1 have a single strategy.\nHence, \u0393 \u2286 T2 \u222aT3.\nAny player j \u2208 T2 \u222aT3 cannot improve his cost in \u039b1.\nTherefore, any player j \u2208 T2 \u222a T3 improves his cost in \u039b2.\nHowever, this contradicts the fact that S is a SE in \u039b2.\nIn the following theorem we provide a few examples of topologies in which a strong equilibrium does not exist, showing that our characterization is almost tight.\nTheorem 3.6.\nThe following connection games exist: (1) There exists a multi-commodity fair connection game on a series parallel graph that does not possess a strong equilibrium.\n(2) There exists a single source fair connection game that does not possess a strong equilibrium.\nProof.\nFor claim (1) consider the graph depicted in Figure 1(a).\nThis game has a unique NE where S1 = {e, c}, S2 = {b, f}, and each player has a cost of 5.7 However, consider the following coordinated deviation S .\nS1 = {a, b, c}, 7 In any NE of the game, player 1 will buy the edge e and player 2 will buy the edge f.\nThis is since the alternate path, in the respective part, will cost the player 2.5.\nThus, player 1 (player 2) will buy the edge c (edge b) alone, and each player will have a cost of 5.\ns 2 + 2 2 1 \u2212 2 1 + 3 1 2 \u2212 3 1 1 1 2 \u2212 3 t1 t2 a c d e f h g b Figure 2: Example of a single source connection game that does not admit SE.\nand S2 = {b, c, d}.\nIn this profile, each player pays a cost of 4, and thus improves its cost.\nFor claim (2) consider a single source fair connection game on the graph G depicted in Figure 2.\nThere are two players.\nPlayer i = 1, 2 wishes to connect the source s to its sink ti and the unique NE is S1 = {a, b}, S2 = {a, c}, and each player has a cost of 2.\n8 Then, both players can deviate to S1 = {h, f, d} and S2 = {h, f, e}, and decrease their costs to 2 \u2212 /2.\nUnfortunately, our characterization is not completely tight.\nThe graph in Figure 1(b) is an example of a non-extension parallel graph which always admits a strong equilibrium.\n3.2 Strong Price of Anarchy While the price of anarchy in fair connection games can be as bad as n, the following theorem shows that the strong price of anarchy is bounded by H(n) = n i=1 1 i = \u0398(log n).\nTheorem 3.7.\nThe strong price of anarchy of a fair connection game with n players is at most H(n).\nProof.\nLet \u039b be a fair connection game on the graph G.\nWe denote by \u039b(\u0393) the game played on the graph G by a set of players \u0393, where the action of player i \u2208 \u0393 remains \u03a3i (the same as in \u039b).\nLet S = (S1, ... 
, Sn) be a profile in the game \u039b.\nWe denote by S(\u0393) = S\u0393 the induced profile of players in \u0393 in the game \u039b(\u0393).\nLet ne(S(\u0393)) denote the load of edge e under the profile S(\u0393) in the game \u039b(\u0393), i.e., ne(S(\u0393)) = |{j|j \u2208 \u0393, e \u2208 Sj}|.\nSimilar to congestion games [16, 13] we denote by \u03a6(S(\u0393)) the potential function of the profile S(\u0393) in the game \u039b(\u0393), where \u03a6(S(\u0393)) = e\u2208E ne(S(\u0393)) j=1 fe(j), and define \u03a6(S(\u03c6)) = 0.\nIn our case, it holds that \u03a6(S) = e\u2208E ce \u00b7 H(ne(S)).\n(1) Let S be a SE, and let S\u2217 be the profile of the optimal solution.\nWe define an order on the players as follows.\nLet \u0393n = {1, ..., n} be the set of all the players.\nFor each k = 8 We can show that this is the unique NE by a simple case analysis: (i) If S1 = {h, f, d} and S2 = {h, f, e}, then player 1 can deviate to S1 = {h, g} and decrease his cost.\n(ii) If S1 = {h, g} and S2 = {h, f, e}, then player 2 can deviate to S2 = {a, c} and decrease his cost.\n(iii) If S1 = {h, g} and S2 = {a, c}, then player 1 can deviate to S1 = {a, b} and decrease his cost.\n88 n, ... , 1, since S is a SE, there exists a player in \u0393k, w.l.o.g. call it player k, such that, ck(S) \u2264 ck(S\u2212\u0393k , S\u2217 \u0393k ).\n(2) In this way, \u0393k is defined recursively, such that for every k = n, ... , 2 it holds that \u0393k\u22121 = \u0393k \\ {k}.\n(I.e., after the renaming, \u0393k = {1, ... , k}.)\nLet ck(S(\u0393k)) denote the cost of player k in the game \u039b(\u0393k) under the induced profile S(\u0393k).\nIt is easy to see that ck(S(\u0393k)) = \u03a6(S(\u0393k)) \u2212 \u03a6(S(\u0393k\u22121)).9 Therefore, ck(S) \u2264 ck(S\u2212\u0393k , S\u2217 \u0393k ) (3) \u2264 ck(S\u2217 (\u0393k)) = \u03a6(S\u2217 (\u0393k)) \u2212 \u03a6(S\u2217 (\u0393k\u22121)).\nSumming over all players, we obtain: i\u2208N ci(S) \u2264 \u03a6(S\u2217 (\u0393n)) \u2212 \u03a6(S\u2217 (\u03c6)) = \u03a6(S\u2217 (\u0393n)) = e\u2208S\u2217 ce \u00b7 H(ne(S\u2217 )) \u2264 e\u2208S\u2217 ce \u00b7 H(n) = H(n) \u00b7 OPT(\u039b), where the first inequality follows since the sum of the right hand side of equation (3) telescopes, and the second equality follows from equation (1).\nNext we bound the SPoA when coalitions of size at most k are allowed.\nTheorem 3.8.\nThe k-SPoA of a fair connection game with n players is at most n k \u00b7 H(k).\nProof.\nLet S be a SE of \u039b, and S\u2217 be the profile of the optimal solution of \u039b.\nTo simplify the proof, we assume that n/k is an integer.\nWe partition the players to n/k groups T1, ... , Tn/k each of size k. Let \u039bj be the game on the graph G played by the set of players Tj.\nLet S(Tj) denote the profile of the k players in Tj in the game \u039bj induced by the profile S of the game \u039b.\nBy Theorem 3.7, it holds that for each game \u039bj, j = 1, ... , n/k, cost\u039bj (S(Tj)) = i\u2208Tj ci(S(Tj)) \u2264 H(k) \u00b7 OPT(\u039bj) \u2264 H(k) \u00b7 OPT(\u039b).\nSumming over all games \u039bj, j = 1, ... 
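The potential function in Eq. (1) and the social cost it bounds can be written down directly (plain Python; edge costs and per-edge loads are passed as dictionaries and the names are ours).

def harmonic(k):
    """H(k) = 1 + 1/2 + ... + 1/k, with H(0) = 0."""
    return sum(1.0 / j for j in range(1, k + 1))

def potential(edge_costs, loads):
    """Phi(S) = sum_e c_e * H(n_e(S)) (Eq. (1)): the congestion-game potential of a
    fair connection game, obtained by summing the shares f_e(j) = c_e / j over j."""
    return sum(c * harmonic(loads.get(e, 0)) for e, c in edge_costs.items())

def social_cost(edge_costs, loads):
    """cost(S): every edge used by at least one player is paid for exactly once."""
    return sum(c for e, c in edge_costs.items() if loads.get(e, 0) > 0)

Since H(1) = 1 and H(n_e) <= H(n), these definitions give cost(S) <= Phi(S) <= H(n) * cost(S) for every profile, and the proof applies the upper bound to the optimal profile S* after the telescoping step.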
, n/k, cost\u039b(S) \u2264 n/k j=1 cost\u039bj (S(Tj)) \u2264 n k \u00b7 H(k) \u00b7 OPT(\u039b), where the first inequality follows since for each group Tj and player i \u2208 Tj, it holds that ci(S) \u2264 ci(S(Tj)).\nNext we show an almost matching lower bound.\n(The lower bound is at most H(n) = O(log n) from the upper bound and both for k = O(1) and k = \u2126(n) the difference is only a constant.)\nTheorem 3.9.\nFor fair connection games with n players, k-SPoA \u2265 max{n k , H(n)}.\n9 This follows since for any strategy profile S, if a single player k deviates to strategy Sk, then the change in the potential value \u03a6(S) \u2212 \u03a6(Sk, S\u2212k) is exactly the change in the cost to player k. t2 s t1 tn\u22122 tn 1 2 t3 tn\u22121 1 1 3 1 n\u22122 2 n 1 + 00 0 0 0 00 0 Figure 3: Example of a network topology in which SPoS > PoS.\nProof.\nFor the lower bound of H(n) we observe that in the example presented in [2], the unique Nash equilibrium is also a strong equilibrium, and therefore k-SPoA = H(n) for any 1 \u2264 k \u2264 n. For the lower bound of n/k, consider a graph composed of two parallel links of costs 1 and n/k.\nConsider the profile S in which all n players use the link of cost n/k.\nThe cost of each player is 1/k, while if any coalition of size at most k deviates to the link of cost 1, the cost of each player is at least 1/k.\nTherefore, the profile S is a k-SE, and k-SPoA = n/k.\nThe results of Theorems 3.7 and 3.8 can be extended to concave cost functions.\nConsider the extended fair connection game, where each edge has a cost which depends on the number of players using that edge, ce(ne).\nWe assume that the cost function ce(ne) is a nondecreasing, concave function.\nNote that the cost of an edge ce(ne) might increase with the number of players using it, but the cost per player fe(ne) = ce(ne)/ne decreases when ce(ne) is concave.\nTheorem 3.10.\nThe strong price of anarchy of a fair connection game with nondecreasing concave edge cost functions and n players is at most H(n).\nProof.\nThe proof is analogues to the proof of Theorem 3.7.\nFor the proof we show that cost(S) \u2264 \u03a6(S\u2217 ) \u2264 H(n)\u00b7cost(S\u2217 ).\nWe first show the first inequality.\nSince the function ce(x) is concave, the cost per player ce(x)/x is a nonincreasing function.\nTherefore inequality (3) in the proof of Theorem 3.7 holds.\nSumming inequality (3) over all players we obtain cost(S) = i ci(S) \u2264 \u03a6(S\u2217 (\u0393n))\u2212\u03a6(S\u2217 (\u03c6)) = \u03a6(S\u2217 ).\nThe second inequality follows since ce(x) is nondecreasing and therefore ne x=1(ce(x)/x) \u2264 H(ne) \u00b7 ce(ne).\nUsing the arguments in the proof of Theorem 3.10 and the proof of Theorem 3.8 we derive, Theorem 3.11.\nThe k-SPoA of a fair connection game with nondecreasing concave edge cost functions and n players is at most n k \u00b7 H(k).\nSince the set of strong equilibria is contained in the set of Nash equilibria, it must hold that SPoA \u2264 PoA, meaning that the SPoA can only be improved compared to the PoA.\nHowever, with respect to the price of stability the opposite direction holds, that is, SPoS \u2265 PoS.\nWe next show that there exists a fair connection game in which the inequality is strict.\n89 2 \u2212 2 \u2212 2 \u2212 3 s t1 t2 t3 Figure 4: Example of a single source general connection game that does not admit a strong equilibrium.\nThe edges that are not labeled with costs have a cost of zero.\nTheorem 3.12.\nThere exists a fair connection game in which SPoS > 
PoS.\nProof.\nConsider a single source fair connection game on the graph G depicted in Figure 3.10 Player i = 1, ... , n wishes to connect the source s to his sink ti.\nAssume that each player i = 1, ... , n \u2212 2 has his own path of cost 1/i from s to ti and players i = n \u2212 1, n have a joint path of cost 2/n from s to ti.\nAdditionally, all players can share a common path of cost 1+ for some small > 0.\nThe optimal solution connects all players through the common path of cost 1 + , and this is also a Nash equilibrium with total cost 1 + .\nIt is easy to verify that the solution where each player i = 1, ... , n\u22122 uses his own path and users i = n\u22121, n use their joint path is the unique strong equilibrium of this game with total cost n\u22122 i=1 1 i + 2 n = \u0398(log n) While the example above shows that the SPoS may be greater than the PoS, the upper bound of H(n) = \u0398(log n), proven for the PoS [2], serves as an upper bound for the SPoS as well.\nThis is a direct corollary from theorem 3.7, as SPoS \u2264 SPoA by definition.\nCorollary 3.13.\nThe strong price of stability of a fair connection game with n players is at most H(n) = O(log n).\n4.\nGENERAL CONNECTION GAMES In this section, we derive our results for general connection games.\n4.1 Existence of Strong Equilibrium We begin with a characterization of the existence of a strong equilibrium in symmetric general connection games.\nSimilar to Theorem 3.1 (using a similar proof) we establish, Theorem 4.1.\nIn every symmetric fair connection game there exists a strong equilibrium.\nWhile every single source general connection game possesses a pure Nash equilibrium [3], it does not necessarily admit some strong equilibrium.11 10 This is a variation on the example given in [2].\n11 We thank Elliot Anshelevich, whose similar topology for the fair-connection game inspired this example.\nTheorem 4.2.\nThere exists a single source general connection game that does not admit any strong equilibrium.\nProof.\nConsider single source general connection game with 3 players on the graph depicted in Figure 4.\nPlayer i wishes to connect the source s with its sink ti.We need to consider only the NE profiles: (i) if all three players use the link of cost 3, then there must be two agents whose total sum exceeds 2, thus they can both reduce cost by deviating to an edge of cost 2\u2212 .\n(ii) if two of the players use an edge of cost 2\u2212 jointly, and the third player uses a different edge of cost 2 \u2212 , then, the players with non-zero payments can deviate to the path with the edge of cost 3 and reduce their costs (since before the deviation the total payments of the players is 4 \u2212 2 ).\nWe showed that none of the NE are SE, and thus the game does not possess any SE.\nNext we show that for the class of series parallel graphs, there is always a strong equilibrium in the case of a single source.\nTheorem 4.3.\nIn every single source general connection game on a series-parallel graph, there exists a strong equilibrium.\nProof.\nLet \u039b be a single source general connection game on a SPG G = (V, E) with source s and sink t.\nWe present an algorithm that constructs a specific SE.\nWe first consider the following partial order between the players.\nFor players i and j, we have that i \u2192 j if there is a directed path from ti to tj.\nWe complete the partial order to a full order (in an arbitrary way), and w.l.o.g. 
we assume that 1 \u2192 2 \u2192 \u00b7 \u00b7 \u00b7 \u2192 n.\nThe algorithm COMPUTE-SE, considers the players in an increasing order, starting with player 1.\nEach player i will fully buy a subset of the edges, and any player j > i will consider the cost of those (bought) edges as zero.\nWhen COMPUTE-SE considers player j, the cost of the edges that players 1 to j\u22121 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj.\nNamely, for every edges e \u2208 Qj \\ \u222ai i pays for any edge on any path from s to ti.\nConsider a player k > i and let Qk = Qk \u222a Qk , where Qk is a path connecting tk to t. Let yk be the intersecting vertex of Qk and ti.\nSince there exists a path from s to yk that was fully paid for by players j < k before the deviation, in particularly the path Qi s,yk , player k will not pay for any edge on any path connecting s and yk.\nTherefore player i fully pays for all edges on the path \u00afQi y,ti , i.e., \u00afpi(e) = ce for all edges e \u2208 \u00afQi y,ti .\nNow consider the algorithm COMPUTESE at the step when player i selects a shortest path from the source s to its sink ti and determines his payment pi.\nAt this point, player i could buy the path \u00afQi y,ti , since a path from s to y was already paid for by players j < i. Hence, ci(\u00afp) \u2265 ci(p).\nThis contradicts the fact that player i improved its cost and therefore not all the players in \u0393 reduce their cost.\nThis implies that p is a strong equilibrium.\n4.2 Strong Price of Anarchy While for every single source general connection game, it holds that PoS = 1 [3], the price of anarchy can be as large as n, even for two parallel edges.\nHere, we show that any strong equilibrium in single source general connection games yields the optimal cost.\nTheorem 4.4.\nIn single source general connection game, if there exists a strong equilibrium, then the strong price of anarchy is 1.\nProof.\nLet p = (p1, ... , pn) be a strong equilibrium, and let T\u2217 be the minimum cost Steiner tree on all players, rooted at the (single) source s. Let T\u2217 e be the subtree of T\u2217 disconnected from s when edge e is removed.\nLet \u0393(Te) be the set of players which have sinks in Te.\nFor a set of edges E, let c(E) = e\u2208E ce.\nLet P(Te) = i\u2208\u0393(Te) ci(p).\nAssume by way of contradiction that c(p) > c(T\u2217 ).\nWe will show that there exists a sub-tree T of T\u2217 , that connects a subset of players \u0393 \u2286 N, and a new set of payments \u00afp, such that for each i \u2208 \u0393, ci(\u00afp) < ci(p).\nThis will contradict the assumption that p is a strong equilibrium.\nFirst we show how to find a sub-tree T of T\u2217 , such that for any edge e, the payments of players with sinks in T\u2217 e is more than the cost of T\u2217 e \u222a {e}.\nTo build T , define an edge e to be bad if the cost of T\u2217 e \u222a {e} is at least the payments of the players with sinks in T\u2217 e , i.e., c(T\u2217 e \u222a {e}) \u2265 P(T\u2217 e ).\nLet B be the set of bad edges.\nWe define T to be T\u2217 \u2212 \u222ae\u2208B(T\u2217 e \u222a {e}).\nNote that we can find a subset B of B such that \u222ae\u2208B(T\u2217 e \u222a {e}) is equal to \u222ae\u2208B (T\u2217 e \u222a {e}) and for any e1, e2 \u2208 B we have T\u2217 e1 \u2229 T\u2217 e2 = \u2205.\n(The set B will include any edge e \u2208 B for which there is no other edge e \u2208 B on the path from e to the source s.) 
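For reference, the COMPUTE-SE procedure described in the proof of Theorem 4.3 above can be sketched as follows (Python with networkx); the player ordering is assumed to be supplied by the caller as a list of sinks consistent with the completed partial order, and the data layout is ours.

import networkx as nx

def compute_se(G, source, sinks_in_order):
    """Sketch of COMPUTE-SE.

    G              : nx.DiGraph with a nonnegative 'cost' attribute on every edge.
    sinks_in_order : sinks t_1, ..., t_n listed in the completed order of the proof.
    Returns {player index: {edge: payment}}; each player fully buys the edges of a
    cheapest s -> t_i path whose cost has not already been covered by earlier players.
    """
    residual = {e: G.edges[e]["cost"] for e in G.edges}   # cost still to be paid
    payments = {}
    for i, sink in enumerate(sinks_in_order, start=1):
        # Edges bought by earlier players are free for player i.
        path = nx.shortest_path(G, source, sink,
                                weight=lambda u, v, d: residual[(u, v)])
        pay = {}
        for e in zip(path, path[1:]):
            if residual[e] > 0:
                pay[e] = residual[e]
                residual[e] = 0.0
        payments[i] = pay
    return payments

Each player thus pays in full only for edges that no earlier player has bought, which is exactly the payment profile p analyzed in that proof.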
Considering the edges in e \u2208 B we can see that any subtree T\u2217 e we delete from T can not decrease the difference between the payments and the cost of the remaining tree.\nTherefore, in T for every edge e, we have that c(Te \u222a {e}) < P(Te).\nNow we have a tree T and our coalition will be \u0393(T ).\nWhat remain is to find payments \u00afp for the players in \u0393(T ) such that they will buy the tree T and every player in \u0393(T ) will lower its cost, i.e. ci(p) > ci(\u00afp) for i \u2208 \u0393(T ).\n(Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti.)\nWe will now define the coalition payments \u00afp. Let ci(\u00afp, Te) = e\u2208Te \u00afpi(e) be the payments of player i for the subtree Te.\nWe will show that for every subtree Te, ci(\u00afp, Te \u222a {e}) < ci(p), and hence ci(\u00afp) < ci(p).\nConsider the following bottom up process that defines \u00afp.\nWe assign the payments of edge e in T , after we assign payments to all the edges in Te.\nThis implies that when we assign payments for e, we have that the sum of the payments in Te is equal to c(Te) = i\u2208\u0393(Te) ci(\u00afp, Te).\nSince e was not a bad edge, we know that c(Te \u222a {e}) = c(Te) + ce < P(Te).\nTherefore, we can update the payments \u00afp of players i \u2208 \u0393(Te), by setting \u00afpi(e) = ce\u2206i/( j\u2208\u0393(Te) \u2206j), where \u2206j = cj(p) \u2212 cj(\u00afp, Te).\nAfter the update we have for player i \u2208 \u0393(Te), ci(\u00afp, Te \u222a {e}) = ci(\u00afp, Te) + \u00afpi(e) = ci(\u00afp, Te) + \u2206i ce j\u2208\u0393(Te) \u2206j = ci(p) \u2212 \u2206i(1 \u2212 ce P(\u0393(Te)) \u2212 c(Te) ), where we used the fact that j\u2208\u0393(Te) \u2206j = P(\u0393(Te))\u2212c(Te).\nSince ce < P(\u0393(Te)) \u2212 c(Te) it follows that ci(\u00afp, Te \u222a {e}) < ci(p).\n5.\nREFERENCES [1] N. Andelman, M. Feldman, and Y. Mansour.\nStrong Price of Anarchy.\nIn SODA``07, 2007.\n[2] E. Anshelevich, A. Dasgupta, J. M. Kleinberg, \u00b4E. Tardos, T. Wexler, and T. Roughgarden.\nThe price of stability for network design with fair cost allocation.\nIn FOCS, pages 295-304, 2004.\n[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler.\nNear-Optimal Network Design with Selfish Agents.\nIn STOC``03, 2003.\n[4] R. Aumann.\nAcceptable Points in General Cooperative n-Person Games.\nIn Contributions to the Theory of Games, volume 4, 1959.\n[5] A. Czumaj and B. V\u00a8ocking.\nTight bounds for worst-case equilibria.\nIn SODA, pages 413-420, 2002.\n[6] A. Fabrikant, A. Luthra, E. Maneva, C. Papadimitriou, and S. Shenker.\nOn a network creation game.\nIn ACM Symposium on Principles of Distriubted Computing (PODC), 2003.\n[7] R. Holzman and N. Law-Yone.\nStrong equilibrium in congestion games.\nGames and Economic Behavior, 21:85-101, 1997.\n[8] R. Holzman and N. L.-Y.\n(Lev-tov).\nNetwork structure and strong equilibrium in route selection games.\nMathematical Social Sciences, 46:193-205, 2003.\n[9] E. Koutsoupias and C. H. Papadimitriou.\nWorst-case equilibria.\nIn STACS, pages 404-413, 1999.\n[10] I. Milchtaich.\nTopological conditions for uniqueness of equilibrium in networks.\nMathematics of Operations Research, 30:225244, 2005.\n[11] I. Milchtaich.\nNetwork topology and the efficiency of equilibrium.\nGames and Economic Behavior, 57:321346, 2006.\n[12] I. Milchtaich.\nThe equilibrium existence problem in finite network congestion games.\nForthcoming in Lecture Notes in Computer Science, 2007.\n[13] D. Monderer and L. S. 
Shapley.\nPotential Games.\nGames and Economic Behavior, 14:124-143, 1996.\n[14] H. Moulin and S. Shenker.\nStrategyproof sharing of 91 submodular costs: Budget balance versus efficiency.\nEconomic Theory, 18(3):511-533, 2001.\n[15] C. Papadimitriou.\nAlgorithms, Games, and the Internet.\nIn Proceedings of 33rd STOC, pages 749-753, 2001.\n[16] R. W. Rosenthal.\nA class of games possessing pure-strategy Nash equilibria.\nInternational Journal of Game Theory, 2:65-67, 1973.\n[17] T. Roughgarden.\nThe Price of Anarchy is Independent of the Network Topology.\nIn STOC``02, pages 428-437, 2002.\n[18] T. Roughgarden and E. Tardos.\nHow bad is selfish routing?\nJournal of the ACM, 49(2):236 - 259, 2002.\n[19] O. Rozenfeld and M. Tennenholtz.\nStrong and correlated strong equilibria in monotone congestion games.\nIn Workshop on Internet and Network Economics, 2006.\n92", "lvl-3": "Strong Equilibrium in Cost Sharing Connection Games *\nABSTRACT\nIn this work we study cost sharing connection games , where each player has a source and sink he would like to connect , and the cost of the edges is either shared equally ( fair connection games ) or in an arbitrary way ( general connection games ) .\nWe study the graph topologies that guarantee the existence of a strong equilibrium ( where no coalition can improve the cost of each of its members ) regardless of the specific costs on the edges .\nOur main existence results are the following : ( 1 ) For a single source and sink we show that there is always a strong equilibrium ( both for fair and general connection games ) .\n( 2 ) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists ( both for fair and general connection games ) .\n( 3 ) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games .\nAs for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398 ( log n ) from the optimal solution , where n is the number of players .\n( This should be contrasted with the \u03a9 ( n ) price of anarchy for the same setting . 
)\nFor single source general connection games and single source single sink fair connection games , we show that a strong equilibrium is always an optimal solution .\n* Research supported in part by a grant of the Israel Science Foundation , Binational Science Foundation ( BSF ) , GermanIsraeli Foundation ( GIF ) , Lady Davis Fellowship , an IBM faculty award , and the IST Programme of the European Community , under the PASCAL Network of Excellence , IST-2002-506778 .\nThis publication only reflects the authors ' views .\n1 .\nINTRODUCTION\nComputational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems .\nThe view that the demand side is many times not under the control of a central authority that optimizes the global performance , but rather under the control of individuals with different incentives , has led already to many important insights .\nConsider classical routing and transportation problems such as multicast or multi-commodity problems , which are many times viewed as follows .\nWe are given a graph with edge costs and connectivity demands between nodes , and our goal is to find a minimal cost solution .\nThe classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives .\nThe game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility , and the resulting outcome could be far from the optimal solution .\nWhen considering individual incentives one needs to discuss the appropriate solution concept .\nMuch of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept .\nIndeed Nash equilibrium has many benefits , and most importantly it always exists ( in mixed strategies ) .\nHowever , the solution concept of Nash equilibrium is resilient only to unilateral deviations , while in reality , players may be able to coordinate their actions .\nA strong equilibrium [ 4 ] is a state from which no coalition ( of any size ) can deviate and improve the utility of every member of the coalition ( while possibly lowering the utility\nof players outside the coalition ) .\nThis resilience to deviations by coalitions of the players is highly attractive , and one can hope that once a strong equilibrium is reached it is highly likely to sustain .\nFrom a computational game theory point of view , an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior .\nThe strong price of anarchy ( SPoA ) , introduced in [ 1 ] , is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution .\nObviously , SPoA is meaningful only in those cases where a strong equilibrium exists .\nA major downside of strong equilibrium is that most games do not admit any strong equilibrium .\nEven simple classical games like the prisoner 's dilemma do not posses any strong equilibrium ( which is also an example of a congestion game that does not posses a strong equilibriums ) .\nThis unfortunate fact has reduced the concentration in strong equilibrium , despite its highly attractive properties .\nYet , [ 1 ] have identified two broad families of games , namely job scheduling and network formation , where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy ( which is 
the ratio between the worst Nash equilibrium and the optimal solution [ 15 , 18 , 5 , 6 ] ) .\nIn this work we concentrate on cost sharing connection games , introduced by [ 3 , 2 ] .\nIn such a game , there is an underlying directed graph with edge costs , and individual users have connectivity demands ( between a source and a sink ) .\nWe consider two models .\nThe fair cost connection model [ 2 ] allows each player to select a path from the source to the sink2 .\nIn this game the cost of an edge is shared equally between all the players that selected the edge , and the cost of the player is the sum of its costs on the edges it selected .\nThe general connection game [ 3 ] allows each player to offer prices for edges .\nIn this game an edge is bought if the sum of the offers at least covers its cost , and the cost of the player is the sum of its offers on the bought edges ( in both games we assume that the player has to guarantee the connectivity between its source and sink ) .\nIn this work we focus on two important issues .\nThe first one is identifying under what conditions the existence of a strong equilibrium is guaranteed , and the second one is the quality of the strong equilibria .\nFor the existence part , we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs .\nOne can view this separation between the graph topology and the edge costs , as a separation between the underlying infrastructure and the costs the players observe to purchase edges .\nWhile one expects the infrastructure to be stable over long periods of time , the costs the players observe can be easily modified over short time periods .\nSuch a topological characterization of the underlying infrastructure provides a network designer topological conditions that will ensure stability in his network .\nOur results are as follows .\nFor the single commodity case ( all the players have the same source and sink ) , there is a strong equilibrium in any graph ( both for fair and general connection games ) .\nMoreover , the strong equilibrium is also swhile any congestion game is known to admit at least one Nash equilibrium in pure strategies [ 16 ] .\n2The fair cost sharing scheme is also attractive from a mechanism design point of view , as it is a strategyproof costsharing mechanism [ 14 ] .\nthe optimal solution ( namely , the players share a shortest path from the common source to the common sink ) .\nFor the case of a single source and multiple sinks ( for example , in a multicast tree ) , we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph , and we show an example of a nonseries parallel graph that does not have a strong equilibrium .\nFor the case of multi-commodity ( multi sources and sinks ) , we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium , and we show an example of a series parallel graph that does not have a strong equilibrium .\nAs far as we know , we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games .\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398 ( log n ) from the optimal solution , where n is the number of players .\nThis should be contrasted with the \u0398 ( n ) bound that exists for the price of anarchy [ 2 ] .\nFor single source general connection games , we 
show that any series parallel graph possesses a strong equilibrium , and we show an example of a graph that does not have a strong equilibrium .\nIn this case we also show that any strong equilibrium is optimal .\nRelated work\nTopological characterizations for single-commodity network games have been recently provided for various equilibrium properties , including equilibrium existence [ 12 , 7 , 8 ] , equilibrium uniqueness [ 10 ] and equilibrium efficiency [ 17 , 11 ] .\nThe existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [ 12 ] .\nThe existence of strong equilibrium was studied in both utility-decreasing ( e.g. , routing ) and utility-increasing ( e.g. , fair cost-sharing ) congestion games .\n[ 7 , 8 ] have provided a full topological characterization for a SE existence in single-commodity utility-decreasing congestion games , and showed that a SE always exists if and only if the underlying graph is extension-parallel .\n[ 19 ] have shown that in single-commodity utility-increasing congestion games , the topological characterization is essentially equivalent to parallel links .\nIn addition , they have shown that these results hold for correlated strong equilibria as well ( in contrast to the decreasing setting , where correlated strong equilibria might not exist at all ) .\nWhile the fair cost sharing games we study are utility increasing network congestion games , we derive a different characterization than [ 19 ] due to the different assumptions regarding the players ' actions .3\n2 .\nMODEL\n2.1 Game Theory definitions\n2.2 Cost Sharing Connection Games\n2.3 Extension Parallel and Series Parallel Directed Graphs\n3 .\nFAIR CONNECTION GAMES\n3.1 Existence of Strong Equilibrium\n3.2 Strong Price of Anarchy\n4 .\nGENERAL CONNECTION GAMES\nIn this section , we derive our results for general connection games .\n4.1 Existence of Strong Equilibrium\nWe begin with a characterization of the existence of a strong equilibrium in symmetric general connection games .\nSimilar to Theorem 3.1 ( using a similar proof ) we establish , THEOREM 4.1 .\nIn every symmetric fair connection game there exists a strong equilibrium .\nWhile every single source general connection game possesses a pure Nash equilibrium [ 3 ] , it does not necessarily admit some strong equilibrium .11\nthe fair-connection game inspired this example .\nTHEOREM 4.2 .\nThere exists a single source general connection game that does not admit any strong equilibrium .\nPROOF .\nConsider single source general connection game with 3 players on the graph depicted in Figure 4 .\nPlayer i wishes to connect the source s with its sink ti.We need to consider only the NE profiles : ( i ) if all three players use the link of cost 3 , then there must be two agents whose total sum exceeds 2 , thus they can both reduce cost by deviating to an edge of cost 2 \u2212 E. 
( ii ) if two of the players use an edge of cost 2 \u2212 e jointly , and the third player uses a different edge of cost 2 \u2212 e , then the players with non-zero payments can deviate to the path with the edge of cost 3 and reduce their costs ( since before the deviation the total payments of these players are 4 \u2212 2e ) .\nWe showed that none of the NE are SE , and thus the game does not possess any SE .\nNext we show that for the class of series parallel graphs , there is always a strong equilibrium in the case of a single source .\nPROOF .\nLet \u039b be a single source general connection game on a SPG G = ( V , E ) with source s and sink t .\nWe present an algorithm that constructs a specific SE .\nWe first consider the following partial order between the players .\nFor players i and j , we have that i \u2192 j if there is a directed path from ti to tj .\nWe complete the partial order to a full order ( in an arbitrary way ) , and w.l.o.g. we assume that 1 \u2192 2 \u2192 \u00b7 \u00b7 \u00b7 \u2192 n .\nThe algorithm COMPUTE-SE considers the players in increasing order , starting with player 1 .\nEach player i will fully buy a subset of the edges , and any player j > i will consider the cost of those ( bought ) edges as zero .\nWhen COMPUTE-SE considers player j , the cost of the edges that players 1 to j \u2212 1 have bought is set to zero , and player j fully buys a shortest path Qj from s to tj .\nNamely , for every edge e \u2208 Qj \\ \u222a_{i < j} Qi we have pj ( e ) = ce , and otherwise pj ( e ) = 0 .
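To make the construction concrete, the following Python fragment is a minimal sketch of the COMPUTE-SE procedure just described, under stated assumptions: the graph encoding, the reachability-based ordering, the Dijkstra helper, and all identifiers are illustrative choices rather than the paper's notation, parallel edges are not modeled, and every sink is assumed reachable from the source.

```python
import heapq
from collections import defaultdict

def compute_se(edges, cost, source, sinks):
    """Sketch of COMPUTE-SE (illustrative encoding, not the paper's code).

    edges  : list of directed edges (u, v)
    cost   : dict (u, v) -> non-negative edge cost
    source : the common source s
    sinks  : dict player -> sink t_i (assumed reachable from s)
    Returns payment[i][(u, v)], each player's offer per edge.
    """
    adj = defaultdict(list)
    for (u, v) in edges:
        adj[u].append(v)

    def reachable(a, b):
        # DFS: is there a directed path from a to b?
        stack, seen = [a], {a}
        while stack:
            x = stack.pop()
            if x == b:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    # Order players so that i precedes j whenever t_i can reach t_j;
    # counting reachable sinks gives one valid completion to a total order.
    players = sorted(sinks, key=lambda i: sum(reachable(sinks[i], sinks[j])
                                              for j in sinks), reverse=True)

    bought = set()                       # edges already fully paid for
    payment = {i: defaultdict(float) for i in sinks}
    for i in players:
        # Dijkstra from s to t_i with bought edges treated as free.
        dist, prev = {source: 0.0}, {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                w = 0.0 if (u, v) in bought else cost[(u, v)]
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        # Walk the shortest path back and fully buy its unpaid edges.
        v = sinks[i]
        while v != source:
            u = prev[v]
            if (u, v) not in bought:
                payment[i][(u, v)] = cost[(u, v)]
                bought.add((u, v))
            v = u
    return payment
```

In this sketch each edge is bought exactly once, by the first player in the order whose shortest path uses it, mirroring the payments pj defined above.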
We next show that the algorithm COMPUTE-SE computes a SE .\nAssume by way of contradiction that the profile p is not a SE .\nThen , there exists a coalition that can improve the costs of all its players by a deviation .\nLet \u0393 be such a coalition of minimal size and let player i = max { j \u2208 \u0393 } .\nFor a player j \u2208 \u0393 let \u00af Qj and \u00af pj be the path and payment of player j after the deviation , respectively .\nLet Q ' be a path from the sink of player i , i.e. ti , to the sink of G , i.e. t .\nThen Q = \u00af Qi \u222a Q ' is a path from the source s to the sink t .\nFor any player j < i , let yj be the intersecting vertex of Q and tj ( by Lemma 2.1 such a vertex is guaranteed to exist ) .\nLet y be the furthest vertex on the path Q such that y = yj for some j < i .\nThe path from the source s to node y was fully paid for by players j < i in p ( before the deviation ) .\nThere are two cases we consider .\ncase a : After the deviation player i does not pay for edges in \u222a_{j \u2208 \u0393 \\ { i }} \u00af Qj .\nThis contradicts the minimality of the size of the coalition \u0393 , since the players in \u0393 \\ { i } can form a smaller coalition with payments \u00af p .\ncase b : Otherwise , we show that player i 's cost after the deviation , i.e. ci ( \u00af p ) , is at least his cost before the deviation , i.e. ci ( p ) , contradicting the fact that player i improved his cost .\nRecall that given two vertices u , v on a path \u00af Q , we denote by \u00af Q_{u,v} the subpath of \u00af Q from u to v .\nBefore the deviation of the coalition \u0393 , a path from s to y was fully paid for by the players j < i. Next we show that no player k > i pays for any edge on any path from s to ti .\nConsider a player k > i and let Q'k = Qk \u222a Q''k , where Q''k is a path connecting tk to t .\nLet yk be the intersecting vertex of Q'k and ti .\nSince there exists a path from s to yk that was fully paid for by players j < k before the deviation , in particular the subpath of Qi from s to yk , player k will not pay for any edge on any path connecting s and yk .\nTherefore player i fully pays for all edges on the path \u00af Qi_{y,ti} , i.e. , \u00af pi ( e ) = ce for all edges e \u2208 \u00af Qi_{y,ti} .\nNow consider the algorithm COMPUTE-SE at the step when player i selects a shortest path from the source s to its sink ti and determines his payment pi .\nAt this point , player i could buy the path \u00af Qi_{y,ti} , since a path from s to y was already paid for by players j < i. Hence , ci ( \u00af p ) \u2265 ci ( p ) .\nThis contradicts the fact that player i improved its cost , and therefore not all the players in \u0393 reduce their cost .\nThis implies that p is a strong equilibrium .\n4.2 Strong Price of Anarchy\nWhile for every single source general connection game it holds that PoS = 1 [ 3 ] , the price of anarchy can be as large as n , even for two parallel edges .\nHere , we show that any strong equilibrium in single source general connection games yields the optimal cost .\nPROOF .\nLet p = ( p1 , ... , pn ) be a strong equilibrium , and let T \u2217 be the minimum cost Steiner tree on all players , rooted at the ( single ) source s .\nLet T\u2217_e be the subtree of T \u2217 disconnected from s when edge e is removed .\nLet \u0393 ( T_e ) be the set of players which have sinks in T_e .\nFor a set of edges E , let c ( E ) = \u03a3_{e \u2208 E} ce .\nLet P ( T_e ) = \u03a3_{i \u2208 \u0393 ( T_e )} ci ( p ) .\nAssume by way of contradiction that c ( p ) > c ( T \u2217 ) .\nWe will show that there exists a sub-tree T ' of T \u2217 that connects a subset of players \u0393 \u2286 N , and a new set of payments \u00af p , such that for each i \u2208 \u0393 , ci ( \u00af p ) < ci ( p ) .\nThis will contradict the assumption that p is a strong equilibrium .\nFirst we show how to find a sub-tree T ' of T \u2217 such that for any edge e , the payments of the players with sinks in T'_e are more than the cost of T'_e \u222a { e } .\nTo build T ' , define an edge e to be bad if the cost of T\u2217_e \u222a { e } is at least the payments of the players with sinks in T\u2217_e , i.e. , c ( T\u2217_e \u222a { e } ) \u2265 P ( T\u2217_e ) .\nLet B be the set of bad edges .\nWe define T ' to be T \u2217 \u2212 \u222a_{e \u2208 B} ( T\u2217_e \u222a { e } ) .\nNote that we can find a subset B ' of B such that \u222a_{e \u2208 B '} ( T\u2217_e \u222a { e } ) is equal to \u222a_{e \u2208 B} ( T\u2217_e \u222a { e } ) and for any e1 , e2 \u2208 B ' we have T\u2217_e1 \u2229 T\u2217_e2 = \u2205 .\n( The set B ' will include any edge e \u2208 B for which there is no other edge e ' \u2208 B on the path from e to the source s. ) Considering the edges e \u2208 B ' , we can see that deleting any subtree T\u2217_e from T \u2217 cannot decrease the difference between the payments and the cost of the remaining tree .\nTherefore , in T ' for every edge e we have that c ( T'_e \u222a { e } ) < P ( T'_e ) .\nNow we have a tree T ' and our coalition will be \u0393 ( T ' ) .\nWhat remains is to find payments \u00af p for the players in \u0393 ( T ' ) such that they will buy the tree T ' and every player in \u0393 ( T ' ) will lower its cost , i.e. , ci ( p ) > ci ( \u00af p ) for i \u2208 \u0393 ( T ' ) .\n( Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti . )\nWe will now define the coalition payments \u00af p .
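Before the formal definition of \u00af p that follows, here is a rough Python sketch of the bottom-up assignment; the rooted-tree encoding, the helper names, and the proportional split of each edge cost among the players' remaining slack are assumptions made for illustration, not the paper's notation.

```python
from collections import defaultdict

def assign_coalition_payments(children, edge_cost, sink_of, old_cost, root):
    """Illustrative sketch of the bottom-up definition of the payments p-bar.

    children  : dict node -> list of children in the pruned tree T'
    edge_cost : dict node v -> cost of the edge joining v to its parent
    sink_of   : dict player -> its sink node (assumed to lie in T')
    old_cost  : dict player -> c_i(p), the player's cost under the SE p
    root      : the common source s
    Returns pay[i][v] = player i's offer on the edge above node v.
    """
    members = defaultdict(set)          # players with sinks inside T'_v

    def collect(v):
        members[v] |= {i for i, t in sink_of.items() if t == v}
        for w in children.get(v, []):
            collect(w)
            members[v] |= members[w]
    collect(root)

    pay = defaultdict(dict)
    spent = defaultdict(float)          # c_i(p-bar, T'_e) accumulated so far

    def bottom_up(v):
        for w in children.get(v, []):   # pay for the edges below v first
            bottom_up(w)
        if v == root or not members[v]:
            return
        ce = edge_cost[v]
        slack = {i: old_cost[i] - spent[i] for i in members[v]}
        total = sum(slack.values())     # equals P(T'_e) - c(T'_e) > c_e here
        for i in members[v]:
            share = ce * slack[i] / total
            pay[i][v] = share           # shares sum to c_e, so e is bought
            spent[i] += share           # stays strictly below old_cost[i]
    bottom_up(root)
    return pay
```

Under the invariant c ( T'_e \u222a { e } ) < P ( T'_e ) guaranteed by the pruning step, every share stays strictly below the player's remaining slack, which is exactly the inequality used in the text below.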
Let ci ( \u00af p , T'_e ) = \u03a3_{e ' \u2208 T'_e} \u00af pi ( e ' ) be the payments of player i for the subtree T'_e .\nWe will show that for every subtree T'_e , ci ( \u00af p , T'_e \u222a { e } ) < ci ( p ) , and hence ci ( \u00af p ) < ci ( p ) .\nConsider the following bottom-up process that defines \u00af p .\nWe assign the payments for an edge e in T ' after we assign payments to all the edges in T'_e .\nThis implies that when we assign payments for e , the sum of the payments in T'_e is equal to c ( T'_e ) , i.e. , \u03a3_{i \u2208 \u0393 ( T'_e )} ci ( \u00af p , T'_e ) = c ( T'_e ) .\nWe also know that c ( T'_e \u222a { e } ) = c ( T'_e ) + ce < P ( T'_e ) .\nTherefore , we can update the payments \u00af p of the players i \u2208 \u0393 ( T'_e ) by setting \u00af pi ( e ) = ce \u00b7 ( ci ( p ) \u2212 ci ( \u00af p , T'_e ) ) / ( P ( T'_e ) \u2212 c ( T'_e ) ) , so that \u03a3_{i \u2208 \u0393 ( T'_e )} \u00af pi ( e ) = ce , where we used the fact that \u03a3_{i \u2208 \u0393 ( T'_e )} ( ci ( p ) \u2212 ci ( \u00af p , T'_e ) ) = P ( T'_e ) \u2212 c ( T'_e ) .\nSince ce < P ( T'_e ) \u2212 c ( T'_e ) , it follows that \u00af pi ( e ) < ci ( p ) \u2212 ci ( \u00af p , T'_e ) , and therefore ci ( \u00af p , T'_e \u222a { e } ) = ci ( \u00af p , T'_e ) + \u00af pi ( e ) < ci ( p ) .", "lvl-4": "Strong Equilibrium in Cost Sharing Connection Games *\nABSTRACT\n\nIn this work we study cost sharing connection games , where each player has a source and sink he would like to connect , and the cost of the edges is either shared equally ( fair connection games ) or in an arbitrary way ( general connection games ) .\nWe study the graph topologies that guarantee the existence of a strong equilibrium ( where no coalition can improve the cost of each of its members ) regardless of the specific costs on the edges .\nOur main existence results are the following : ( 1 ) For a single source and sink we show that there is always a strong equilibrium ( both for fair and general connection games ) .\n( 2 ) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists ( both for fair and general connection games ) .\n( 3 ) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games .\nAs for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398 ( log n ) from the optimal solution , where n is the number of players .\n( This should be contrasted with the \u03a9 ( n ) price of anarchy for the same setting . 
)\nFor single source general connection games and single source single sink fair connection games , we show that a strong equilibrium is always an optimal solution .\n* Research supported in part by a grant of the Israel Science Foundation , Binational Science Foundation ( BSF ) , GermanIsraeli Foundation ( GIF ) , Lady Davis Fellowship , an IBM faculty award , and the IST Programme of the European Community , under the PASCAL Network of Excellence , IST-2002-506778 .\nThis publication only reflects the authors ' views .\n1 .\nINTRODUCTION\nComputational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems .\nConsider classical routing and transportation problems such as multicast or multi-commodity problems , which are many times viewed as follows .\nWe are given a graph with edge costs and connectivity demands between nodes , and our goal is to find a minimal cost solution .\nThe game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility , and the resulting outcome could be far from the optimal solution .\nWhen considering individual incentives one needs to discuss the appropriate solution concept .\nMuch of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept .\nIndeed Nash equilibrium has many benefits , and most importantly it always exists ( in mixed strategies ) .\nHowever , the solution concept of Nash equilibrium is resilient only to unilateral deviations , while in reality , players may be able to coordinate their actions .\nA strong equilibrium [ 4 ] is a state from which no coalition ( of any size ) can deviate and improve the utility of every member of the coalition ( while possibly lowering the utility\nof players outside the coalition ) .\nThis resilience to deviations by coalitions of the players is highly attractive , and one can hope that once a strong equilibrium is reached it is highly likely to sustain .\nFrom a computational game theory point of view , an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior .\nThe strong price of anarchy ( SPoA ) , introduced in [ 1 ] , is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution .\nObviously , SPoA is meaningful only in those cases where a strong equilibrium exists .\nA major downside of strong equilibrium is that most games do not admit any strong equilibrium .\nEven simple classical games like the prisoner 's dilemma do not posses any strong equilibrium ( which is also an example of a congestion game that does not posses a strong equilibriums ) .\nThis unfortunate fact has reduced the concentration in strong equilibrium , despite its highly attractive properties .\nIn this work we concentrate on cost sharing connection games , introduced by [ 3 , 2 ] .\nIn such a game , there is an underlying directed graph with edge costs , and individual users have connectivity demands ( between a source and a sink ) .\nWe consider two models .\nThe fair cost connection model [ 2 ] allows each player to select a path from the source to the sink2 .\nIn this game the cost of an edge is shared equally between all the players that selected the edge , and the cost of the player is the sum of its costs on the edges it selected .\nThe general connection game [ 3 ] allows each player to 
offer prices for edges .\nIn this game an edge is bought if the sum of the offers at least covers its cost , and the cost of the player is the sum of its offers on the bought edges ( in both games we assume that the player has to guarantee the connectivity between its source and sink ) .\nIn this work we focus on two important issues .\nThe first one is identifying under what conditions the existence of a strong equilibrium is guaranteed , and the second one is the quality of the strong equilibria .\nFor the existence part , we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs .\nOne can view this separation between the graph topology and the edge costs , as a separation between the underlying infrastructure and the costs the players observe to purchase edges .\nWhile one expects the infrastructure to be stable over long periods of time , the costs the players observe can be easily modified over short time periods .\nOur results are as follows .\nFor the single commodity case ( all the players have the same source and sink ) , there is a strong equilibrium in any graph ( both for fair and general connection games ) .\nMoreover , the strong equilibrium is also swhile any congestion game is known to admit at least one Nash equilibrium in pure strategies [ 16 ] .\n2The fair cost sharing scheme is also attractive from a mechanism design point of view , as it is a strategyproof costsharing mechanism [ 14 ] .\nthe optimal solution ( namely , the players share a shortest path from the common source to the common sink ) .\nFor the case of a single source and multiple sinks ( for example , in a multicast tree ) , we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph , and we show an example of a nonseries parallel graph that does not have a strong equilibrium .\nFor the case of multi-commodity ( multi sources and sinks ) , we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium , and we show an example of a series parallel graph that does not have a strong equilibrium .\nAs far as we know , we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games .\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398 ( log n ) from the optimal solution , where n is the number of players .\nThis should be contrasted with the \u0398 ( n ) bound that exists for the price of anarchy [ 2 ] .\nFor single source general connection games , we show that any series parallel graph possesses a strong equilibrium , and we show an example of a graph that does not have a strong equilibrium .\nIn this case we also show that any strong equilibrium is optimal .\nRelated work\nTopological characterizations for single-commodity network games have been recently provided for various equilibrium properties , including equilibrium existence [ 12 , 7 , 8 ] , equilibrium uniqueness [ 10 ] and equilibrium efficiency [ 17 , 11 ] .\nThe existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [ 12 ] .\nThe existence of strong equilibrium was studied in both utility-decreasing ( e.g. , routing ) and utility-increasing ( e.g. 
, fair cost-sharing ) congestion games .\n[ 7 , 8 ] have provided a full topological characterization for a SE existence in single-commodity utility-decreasing congestion games , and showed that a SE always exists if and only if the underlying graph is extension-parallel .\n[ 19 ] have shown that in single-commodity utility-increasing congestion games , the topological characterization is essentially equivalent to parallel links .\nIn addition , they have shown that these results hold for correlated strong equilibria as well ( in contrast to the decreasing setting , where correlated strong equilibria might not exist at all ) .\nWhile the fair cost sharing games we study are utility increasing network congestion games , we derive a different characterization than [ 19 ] due to the different assumptions regarding the players ' actions .3\n4 .\nGENERAL CONNECTION GAMES\nIn this section , we derive our results for general connection games .\n4.1 Existence of Strong Equilibrium\nWe begin with a characterization of the existence of a strong equilibrium in symmetric general connection games .\nSimilar to Theorem 3.1 ( using a similar proof ) we establish , THEOREM 4.1 .\nIn every symmetric fair connection game there exists a strong equilibrium .\nWhile every single source general connection game possesses a pure Nash equilibrium [ 3 ] , it does not necessarily admit some strong equilibrium .11\nthe fair-connection game inspired this example .\nTHEOREM 4.2 .\nThere exists a single source general connection game that does not admit any strong equilibrium .\nPROOF .\nConsider single source general connection game with 3 players on the graph depicted in Figure 4 .\nWe showed that none of the NE are SE , and thus the game does not possess any SE .\nNext we show that for the class of series parallel graphs , there is always a strong equilibrium in the case of a single source .\nPROOF .\nLet \u039b be a single source general connection game on a SPG G = ( V , E ) with source s and sink t .\nWe first consider the following partial order between the players .\nFor players i and j , we have that i \u2192 j if there is a directed path from ti to tj .\nThe algorithm COMPUTE-SE , considers the players in an increasing order , starting with player 1 .\nEach player i will fully buy a subset of the edges , and any player j > i will consider the cost of those ( bought ) edges as zero .\nWhen COMPUTE-SE considers player j , the cost of the edges that players 1 to j \u2212 1 have bought is set to zero , and player j fully buys a shortest path Qj from s to tj .\nNamely , for every edges e G Qj \\ Ui < jQi we have pj ( e ) = ce and otherwise pj ( e ) = 0 .\nWe next show that the algorithm COMPUTESE computes a SE .\nAssume by way of contradiction that the profile p is not a SE .\nThen , there exists a coalition that can improve the costs of all its players by a deviation .\nLet \u0393 be such a coalition of minimal size and let player i = max { j G \u0393 } .\nFor a player j G \u0393 let \u00af Qj and \u00af pj be the path and payment of player j after the deviation , respectively .\nLet Q ' be a path from the sink of player i , i.e. ti , to the sink of G , i.e. t .\nThen Q = \u00af Qi U Q ' is a path from the source s to the sink t. 
For any player j < i , let yj be the intersecting vertex of Q and tj ( by Lemma 2.1 one is guarantee to exist ) .\nLet y be the furthest vertex on the path Q such that y = yj for some j < i .\nThe path from the source s to node y was fully paid for by players j < i in p ( before the deviation ) .\nThere are two cases we consider .\ncase a : After the deviation player i does not pay for edges in U j \u2208 \u0393 \\ { i } \u00af Qj .\nBefore the deviation of the coalition \u0393 , a path from s to y was fully paid for by the players j < i. Next we show that no player k > i pays for any edge on any path from s to ti .\nConsider a player k > i and let Q0k = Qk U Q00k , where Q00k is a path connecting tk to t. Let yk be the intersecting vertex of Q0k and ti .\nSince there exists a path from s to yk that was fully paid for by players j < k before the deviation , in particularly the path Qis , yk , player k will not pay for any edge on any path connecting s and yk .\nTherefore player i fully pays for all edges on the path \u00af Qiy , ti , i.e. , \u00af pi ( e ) = ce for all edges e E \u00af Qiy , ti .\nNow consider the algorithm COMPUTESE at the step when player i selects a shortest path from the source s to its sink ti and determines his payment pi .\nAt this point , player i could buy the path \u00af Qiy , ti , since a path from s to y was already paid for by players j < i. Hence , ci ( \u00af p ) > ci ( p ) .\nThis contradicts the fact that player i improved its cost and therefore not all the players in \u0393 reduce their cost .\nThis implies that p is a strong equilibrium .\n4.2 Strong Price of Anarchy\nWhile for every single source general connection game , it holds that PoS = 1 [ 3 ] , the price of anarchy can be as large as n , even for two parallel edges .\nHere , we show that any strong equilibrium in single source general connection games yields the optimal cost .\nPROOF .\nLet p = ( p1 , ... , pn ) be a strong equilibrium , and let T \u2217 be the minimum cost Steiner tree on all players , rooted at the ( single ) source s. Let Te \u2217 be the subtree of T \u2217 disconnected from s when edge e is removed .\nLet \u0393 ( Te ) be the set of players which have sinks in Te .\nFor a set of edges E , let c ( E ) = Ee \u2208 E ce .\nAssume by way of contradiction that c ( p ) > c ( T \u2217 ) .\nWe will show that there exists a sub-tree T0 of T \u2217 , that connects a subset of players \u0393 C _ N , and a new set of payments \u00af p , such that for each i E \u0393 , ci ( \u00af p ) < ci ( p ) .\nThis will contradict the assumption that p is a strong equilibrium .\nFirst we show how to find a sub-tree T0 of T \u2217 , such that for any edge e , the payments of players with sinks in Te \u2217 is more than the cost of Te \u2217 U { e } .\nTo build T0 , define an edge e to be bad if the cost of Te \u2217 U { e } is at least the payments of the players with sinks in Te \u2217 , i.e. , c ( Te \u2217 U { e } ) > P ( Te \u2217 ) .\nLet B be the set of bad edges .\nTherefore , in T0 for every edge e , we have that c ( Te0 U { e } ) < P ( T0e ) .\nWhat remain is to find payments p \u00af for the players in \u0393 ( T0 ) such that they will buy the tree T0 and every player in \u0393 ( T0 ) will lower its cost , i.e. ci ( p ) > ci ( \u00af p ) for i E \u0393 ( T0 ) .\n( Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti . )\nWe will now define the coalition payments \u00af p. 
Let ci ( \u00af p , T0 e \u2208 Te \u00af pi ( e ) be the payments of player i for the subtree T0e .\nConsider the following bottom up process that defines \u00af p .\nWe assign the payments of edge e in T0 , after we assign payments to all the edges in T0e .\nTherefore , we can update the payments p \u00af of players i E \u0393 ( T0e ) , by setting\nwhere we used the fact that E e ) .", "lvl-2": "Strong Equilibrium in Cost Sharing Connection Games *\nABSTRACT\nIn this work we study cost sharing connection games , where each player has a source and sink he would like to connect , and the cost of the edges is either shared equally ( fair connection games ) or in an arbitrary way ( general connection games ) .\nWe study the graph topologies that guarantee the existence of a strong equilibrium ( where no coalition can improve the cost of each of its members ) regardless of the specific costs on the edges .\nOur main existence results are the following : ( 1 ) For a single source and sink we show that there is always a strong equilibrium ( both for fair and general connection games ) .\n( 2 ) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists ( both for fair and general connection games ) .\n( 3 ) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games .\nAs for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398 ( log n ) from the optimal solution , where n is the number of players .\n( This should be contrasted with the \u03a9 ( n ) price of anarchy for the same setting . )\nFor single source general connection games and single source single sink fair connection games , we show that a strong equilibrium is always an optimal solution .\n* Research supported in part by a grant of the Israel Science Foundation , Binational Science Foundation ( BSF ) , GermanIsraeli Foundation ( GIF ) , Lady Davis Fellowship , an IBM faculty award , and the IST Programme of the European Community , under the PASCAL Network of Excellence , IST-2002-506778 .\nThis publication only reflects the authors ' views .\n1 .\nINTRODUCTION\nComputational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems .\nThe view that the demand side is many times not under the control of a central authority that optimizes the global performance , but rather under the control of individuals with different incentives , has led already to many important insights .\nConsider classical routing and transportation problems such as multicast or multi-commodity problems , which are many times viewed as follows .\nWe are given a graph with edge costs and connectivity demands between nodes , and our goal is to find a minimal cost solution .\nThe classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives .\nThe game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility , and the resulting outcome could be far from the optimal solution .\nWhen considering individual incentives one needs to discuss the appropriate solution concept .\nMuch of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept .\nIndeed Nash equilibrium has many benefits , and most importantly it always exists ( in 
mixed strategies ) .\nHowever , the solution concept of Nash equilibrium is resilient only to unilateral deviations , while in reality , players may be able to coordinate their actions .\nA strong equilibrium [ 4 ] is a state from which no coalition ( of any size ) can deviate and improve the utility of every member of the coalition ( while possibly lowering the utility\nof players outside the coalition ) .\nThis resilience to deviations by coalitions of the players is highly attractive , and one can hope that once a strong equilibrium is reached it is highly likely to sustain .\nFrom a computational game theory point of view , an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior .\nThe strong price of anarchy ( SPoA ) , introduced in [ 1 ] , is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution .\nObviously , SPoA is meaningful only in those cases where a strong equilibrium exists .\nA major downside of strong equilibrium is that most games do not admit any strong equilibrium .\nEven simple classical games like the prisoner 's dilemma do not posses any strong equilibrium ( which is also an example of a congestion game that does not posses a strong equilibriums ) .\nThis unfortunate fact has reduced the concentration in strong equilibrium , despite its highly attractive properties .\nYet , [ 1 ] have identified two broad families of games , namely job scheduling and network formation , where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy ( which is the ratio between the worst Nash equilibrium and the optimal solution [ 15 , 18 , 5 , 6 ] ) .\nIn this work we concentrate on cost sharing connection games , introduced by [ 3 , 2 ] .\nIn such a game , there is an underlying directed graph with edge costs , and individual users have connectivity demands ( between a source and a sink ) .\nWe consider two models .\nThe fair cost connection model [ 2 ] allows each player to select a path from the source to the sink2 .\nIn this game the cost of an edge is shared equally between all the players that selected the edge , and the cost of the player is the sum of its costs on the edges it selected .\nThe general connection game [ 3 ] allows each player to offer prices for edges .\nIn this game an edge is bought if the sum of the offers at least covers its cost , and the cost of the player is the sum of its offers on the bought edges ( in both games we assume that the player has to guarantee the connectivity between its source and sink ) .\nIn this work we focus on two important issues .\nThe first one is identifying under what conditions the existence of a strong equilibrium is guaranteed , and the second one is the quality of the strong equilibria .\nFor the existence part , we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs .\nOne can view this separation between the graph topology and the edge costs , as a separation between the underlying infrastructure and the costs the players observe to purchase edges .\nWhile one expects the infrastructure to be stable over long periods of time , the costs the players observe can be easily modified over short time periods .\nSuch a topological characterization of the underlying infrastructure provides a network designer topological conditions that will ensure stability in his 
network .\nOur results are as follows .\nFor the single commodity case ( all the players have the same source and sink ) , there is a strong equilibrium in any graph ( both for fair and general connection games ) .\nMoreover , the strong equilibrium is also swhile any congestion game is known to admit at least one Nash equilibrium in pure strategies [ 16 ] .\n2The fair cost sharing scheme is also attractive from a mechanism design point of view , as it is a strategyproof costsharing mechanism [ 14 ] .\nthe optimal solution ( namely , the players share a shortest path from the common source to the common sink ) .\nFor the case of a single source and multiple sinks ( for example , in a multicast tree ) , we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph , and we show an example of a nonseries parallel graph that does not have a strong equilibrium .\nFor the case of multi-commodity ( multi sources and sinks ) , we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium , and we show an example of a series parallel graph that does not have a strong equilibrium .\nAs far as we know , we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games .\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398 ( log n ) from the optimal solution , where n is the number of players .\nThis should be contrasted with the \u0398 ( n ) bound that exists for the price of anarchy [ 2 ] .\nFor single source general connection games , we show that any series parallel graph possesses a strong equilibrium , and we show an example of a graph that does not have a strong equilibrium .\nIn this case we also show that any strong equilibrium is optimal .\nRelated work\nTopological characterizations for single-commodity network games have been recently provided for various equilibrium properties , including equilibrium existence [ 12 , 7 , 8 ] , equilibrium uniqueness [ 10 ] and equilibrium efficiency [ 17 , 11 ] .\nThe existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [ 12 ] .\nThe existence of strong equilibrium was studied in both utility-decreasing ( e.g. , routing ) and utility-increasing ( e.g. , fair cost-sharing ) congestion games .\n[ 7 , 8 ] have provided a full topological characterization for a SE existence in single-commodity utility-decreasing congestion games , and showed that a SE always exists if and only if the underlying graph is extension-parallel .\n[ 19 ] have shown that in single-commodity utility-increasing congestion games , the topological characterization is essentially equivalent to parallel links .\nIn addition , they have shown that these results hold for correlated strong equilibria as well ( in contrast to the decreasing setting , where correlated strong equilibria might not exist at all ) .\nWhile the fair cost sharing games we study are utility increasing network congestion games , we derive a different characterization than [ 19 ] due to the different assumptions regarding the players ' actions .3\n2 .\nMODEL\n2.1 Game Theory definitions\nA game \u039b = < N , ( \u03a3i ) , ( ci ) > has a finite set N = { 1 , ... 
, n } of players .\nPlayer i E N has a set \u03a3i of actions , the joint action set is \u03a3 = \u03a3s x \u00b7 \u00b7 \u00b7 x \u03a3n and a joint action S E \u03a3 is also called a profile .\nThe cost function of player i is\nci : \u03a3 -- + R + , which maps the joint action S E \u03a3 to a non-negative real number .\nLet S = ( S1 , ... , Sn ) denote the profile of actions taken by the players , and let S \u2212 i = ( S1 , ... , Si \u2212 1 , Si +1 , ... , Sn ) denote the profile of actions taken by all players other than player i. Note that S = ( Si , S \u2212 i ) .\nThe social cost of a game A is the sum of the costs of the players , and we denote by OPT ( A ) the minimal social cost of a game A. i.e. , OPT ( A ) = minS \u2208 \u03a3 cost\u039b ( S ) , where cost\u039b ( S ) = Ei \u2208 N ci ( S ) .\nA joint action S E \u03a3 is a pure Nash equilibrium if no player i E N can benefit from unilaterally deviating from his action to another action , i.e. , ` di E N ` dS0i E \u03a3i : ci ( S \u2212 i , S0i ) > ci ( S ) .\nWe denote by NE ( A ) the set of pure Nash equilibria in the game A. Resilience to coalitions : A pure deviation of a set of players \u0393 C N ( also called coalition ) specifies an action for each player in the coalition , i.e. , - y E Xi \u2208 \u0393\u03a3i .\nA joint action S E \u03a3 is not resilient to a pure deviation of a coalition \u0393 if there is a pure joint action - y of \u0393 such that ci ( S \u2212 \u0393 , - y ) < ci ( S ) for every i E \u0393 ( i.e. , the players in the coalition can deviate in such a way that each player in the coalition reduces its cost ) .\nA pure Nash equilibrium S E \u03a3 is a k-strong equilibrium , if there is no coalition \u0393 of size at most k , such that S is not resilient to a pure deviation by \u0393 .\nWe denote by k-SE ( A ) the set of k-strong equilibria in the game A .\nWe denote by SE ( A ) the set of n-strong equilibria , and call S E SE ( A ) a strong equilibrium ( SE ) .\nNext we define the Price of Anarchy [ 9 ] , Price of Stability [ 2 ] , and their extension to Strong Price of Anarchy and Strong Price of Stability .\nof anarchy ( k-SPoA ) for the game A .\nThe Price of Anarchy ( PoA ) is the ratio between the maximal cost of a pure Nash equilibrium ( assuming one exists ) and the social optimum , i.e. , maxS \u2208 NE ( \u039b ) cost\u039b ( S ) / OPT ( A ) .\nSimilarly , the Price of Stability ( PoS ) is the ratio between the minimal cost of a pure Nash equilibrium and the social optimum , i.e. , minS \u2208 NE ( \u039b ) cost\u039b ( S ) / OPT ( A ) .\nThe k-Strong Price of Anarchy ( k-SPoA ) is the ratio between the maximal cost of a k-strong equilibrium ( assuming one exists ) and the social optimum , i.e. , maxS \u2208 k-SE ( \u039b ) cost\u039b ( S ) / OPT ( A ) .\nThe SPoA is the n-SPoA .\nSimilarly , the Strong Price of Stability ( SPoS ) is the ratio between the minimal cost of a pure strong equilibrium and the social optimum , i.e. 
, minS \u2208 SE ( \u039b ) cost\u039b ( S ) / OPT ( A ) .\nNote that both k-SPoA and SPoS are defined only if some strong equilibrium exists .\n2.2 Cost Sharing Connection Games\nA cost sharing connection game has an underlying directed graph G = ( V , E ) where each edge e E E has an associated cost ce > 04 .\nIn a connection game each player i E N has an associated source si and sink ti .\nIn a fair connection game the actions \u03a3i of player i include all the paths from si to ti .\nThe cost of each edge is shared equally by the set of all players whose paths contain it .\nGiven a joint action , the cost of a player is the sum of his costs on the edges it selected .\nMore formally , the cost function of each player on an edge e , in a joint action S , is fe ( ne ( S ) ) = ce ne ( S ) , where ne ( S ) is the number of players that selected a path containing edge e in ci ( S ) = E S .\nThe cost of player i , when selecting path Qi E \u03a3i is e \u2208 Qi fe ( ne ( S ) ) .\n4In some of the existence proofs , we assume that ce > 0 for simplicity .\nThe full version contains the complete proofs for the case ce > 0 .\nIn a general connection game the actions \u03a3i of player i is a payment vector pi , where pi ( e ) is how much player i is offering to contribute to the cost of edge e. 5 Given a profile p , any edge e such that Ei pi ( e ) > ce is considered bought , and Ep denotes the set of bought edges .\nLet Gp = ( V , Ep ) denote the graph bought by the players for profile p = ( p1 , ... , pn ) .\nClearly , each player tries to minimize his total payment which is ci ( p ) = & \u2208 Ep pi ( e ) if si is connected to ti in Gp , and infinity otherwise .6 We denote by c ( p ) = Ei ci ( p ) the total cost under the profile p. For a subgraph H of G we denote the total cost of the edges in H by c ( H ) .\nA symmetric connection game implies that the source and sink of all the players are identical .\n( We also call a symmetric connection game a single source single sink connection game , or a single commodity connection game . )\nA single source connection game implies that the sources of all the players are identical .\nFinally , A multi commodity connection game implies that each player has its own source and sink .\n2.3 Extension Parallel and Series Parallel Directed Graphs\nOur directed graphs would be acyclic , and would have a source node ( from which all nodes are reachable ) and a sink node ( which every node can reach ) .\nWe first define the following actions for composition of directed graphs .\n\u2022 Identification : The identification operation allows to collapse two nodes to one .\nMore formally , given graph G = ( V , E ) we define the identification of a node v1 E V and v2 E V forming a new node v E V as creating a new graph G0 = ( V0 , E0 ) , where V 0 = V -- { v1 , v2 } U { v } and E0 includes the edges of E where the edges of v1 and v2 are now connected to v. \u2022 Parallel composition : Given two directed graphs , G1 = ( V1 , E1 ) and G2 = ( V2 , E2 ) , with sources s1 E V1 and s2 E V2 and sinks t1 E V1 and t2 E V2 , respectively , we define a new graph G = G1IIG2 as follows .\nLet G0 = ( V1 U V2 , E1 U E2 ) be the union graph .\nTo create G = G1IIG2 we identify the sources s1 and s2 , forming a new source node s , and identify the sinks t1 and t2 , forming a new sink t. 
\u2022 Series composition : Given two directed graphs , G1 = ( V1 , E1 ) and G2 = ( V2 , E2 ) , with sources s1 E V1 and s2 E V2 and sinks t1 E V1 and t2 E V2 , respectively , we define a new graph G = G1 -- + G2 as follows .\nLet G0 = ( V1 U V2 , E1 U E2 ) be the union graph .\nTo create G = G1 -- + G2 we identify the vertices t1 and s2 , forming a new vertex u .\nThe graph G has a source s = s1 and a sink t = t2 .\n\u2022 Extension composition : A series composition when\none of the graphs , G1 or G2 , is composed of a single directed edge is an extension composition , and we denote it by G = G1 -- + e G2 .\nAn extension parallel graph ( EPG ) is a graph G consisting of either : ( 1 ) a single directed edge ( s , t ) , ( 2 ) a graph G = G1IIG2 or ( 3 ) a graph G = G1 -- + e G2 , where G1 and G2 are\nextension parallel graphs ( and in the extension composition either G1 or G2 is a single edge . )\n.\nA series parallel graph ( SPG ) is a graph G consisting of either : ( 1 ) a single directed edge ( s , t ) , ( 2 ) a graph G = G1 | | G2 or ( 3 ) a graph G = G1 \u2192 G2 , where G1 and G2 are series parallel graphs .\nGiven a path Q and two vertices u , v on Q , we denote the subpath of Q from u to v by Qu , v .\nThe following lemma , whose proof appears in the full version , would be the main topological tool in the case of single source graph .\nLEMMA 2.1 .\nLet G be an SPG with source s and sink t. Given a path Q , from s to t , and a vertex t ' , there exist a vertex y \u2208 Q , such that for any path Q ' from s to t ' , the path Q ' contains y and the paths Q ' y , t , and Q are edge disjoint .\n( We call the vertex y the intersecting vertex of Q and t ' . )\n3 .\nFAIR CONNECTION GAMES\nThis section derives our results for fair connection games .\n3.1 Existence of Strong Equilibrium\nWhile it is known that every fair connection game possesses a Nash equilibrium in pure strategies [ 2 ] , this is not necessarily the case for a strong equilibrium .\nIn this section , we study the existence of strong equilibrium in fair connection games .\nWe begin with a simple case , showing that every symmetric fair connection game possesses a strong equilibrium .\nPROOF .\nLet s ' be the source and t ' be the sink of all the players .\nWe show that a profile S in which all the players choose the same shortest path Q ( from the source s ' to the sink t ' ) is a strong equilibrium .\nSuppose by contradiction that S is not a SE .\nThen there is a coalition \u0393 that can deviate to a new profile S ' such that the cost of every player j \u2208 \u0393 decreases .\nLet Q ' j be a new path used by player j \u2208 \u0393 .\nSince Q is a shortest path , it holds that c ( Q ' j \\ ( Q \u2229 Q ' j ) ) \u2265 c ( Q \\ ( Q \u2229 Q ' j ) ) , for any path Q ' j. Therefore for every player j \u2208 \u0393 we have that cj ( S ' ) \u2265 cj ( S ) .\nHowever , this contradicts the fact that all players in \u0393 reduce their cost .\n( In fact , no player in \u0393 has reduced its cost . )\nWhile every symmetric fair connection game admits a SE , it does not hold for every fair connection game .\nIn what follows , we study the network topologies that admit a strong equilibrium for any assignment of edge costs , and give examples of topologies for which a strong equilibrium does not exist .\nThe following lemma , whose proof appears in the full version , plays a major role in our proofs of the existence of SE .\nLEMMA 3.2 .\nLet \u039b be a fair connection game on a series parallel graph G with source s and sink t. 
Assume that player i has si = s and ti = t and that \u039b has some SE .\nLet S be a SE that minimizes the cost of player i ( out of all SE ) , i.e. , ci ( S ) = minT ESE ( \u039b ) ci ( T ) and let S * be the profile that minimizes the cost of player i ( out of all possible profiles ) , i.e. , ci ( S * ) = minT E\u03a3 ci ( T ) .\nThen , ci ( S ) = ci ( S * ) .\nThe next lemma considers parallel composition .\nLEMMA 3.3 .\nLet \u039b be a fair connection game on graph G = G1 | | G2 , where G1 and G2 are series parallel graphs .\nIf every fair connection game on the graphs G1 and G2 possesses a strong equilibrium , then the game \u039b possesses a strong equilibrium .\nPROOF .\nLet G1 = ( V1 , E1 ) and G2 = ( V2 , E2 ) have sources s1 and s2 and sinks t1 and t2 , respectively .\nLet Ti be the set of players with an endpoint in Vi \\ { s , t } , for i \u2208 { 1 , 2 } .\n( An endpoint is either a source or a sink of a player ) .\nLet T3 be the set of players j such that sj = s and tj = t. Let \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3 , respectively .\nLet S ' and S ' ' be the SE in \u039b1 and \u039b2 that minimizes the cost of players in T3 , respectively .\nAssume w.l.o.g. that ci ( S ' ) \u2264 ci ( S ' ' ) where player i \u2208 T3 .\nIn addition , let \u039b ' 2 be the game on the graph G2 with players T2 and let S \u00af be a SE in \u039b ' 2 .\nWe will show that the profile S = S ' \u222a S \u00af is a SE in \u039b .\nSuppose by contradiction that S is not a SE .\nThen , there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases .\nBy Lemma 3.2 and the assumption that ci ( S ' ) \u2264 ci ( S ' ' ) , a player j \u2208 T3 can not improve his cost .\nTherefore , \u0393 \u2286 T1 \u222a T2 .\nBut this is a contradiction to S ' being a SE in \u039b1 or S \u00af being a SE in \u039b ' 2 .\nThe following theorem considers the case of single source fair connection games .\nPROOF .\nWe prove the theorem by induction on the network size | V | .\nThe claim obviously holds if | V | = 2 .\nWe show the claim for a series composition , i.e. , G = G1 \u2192 G2 , and for a parallel composition , i.e. , G = G1 | | G2 , where G1 = ( V1 , E1 ) and G2 = ( V2 , E2 ) are SPG 's with sources s1 , s2 , and sinks t1 , t2 , respectively .\nseries composition .\nLet G = G1 \u2192 G2 .\nLet T1 be the set of players j such that tj \u2208 V1 , and T2 be the set of players j such that tj \u2208 V2 \\ { s2 } .\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T2 and T2 , respectively .\nFor every player i \u2208 T2 with action Si in the game \u039b let Si \u2229 E1 be his induced action in the game \u039b1 , and let Si \u2229 E2 be his induced action in the game \u039b2 .\nLet S ' be a SE in \u039b1 that minimizes the cost of players in T2 ( such a SE exists by the induction hypothesis and Lemma 3.2 ) .\nLet S ' ' be any SE in \u039b2 .\nWe will show that the profile S = S ' \u222a S ' ' is a SE in the game \u039b , i.e. 
, for player j \u2208 T2 we use the profile Sj = S ' j \u222a S ' ' j .\nSuppose by contradiction that S is not a SE .\nThen , there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases .\nNow , there are two cases : Case 1 : \u0393 \u2286 T1 .\nThis is a contradiction to S ' being a SE .\nCase 2 : There exists a player j \u2208 \u0393 \u2229 T2 .\nBy Lemma 3.2 , player j can not improve his cost in \u039b1 so the improvement is due to \u039b2 .\nConsider the coalition \u0393 \u2229 T2 , it would still improve its cost .\nHowever , this contradicts the fact that S ' ' is a SE in \u039b2 .\nparallel composition .\nFollows from Lemma 3.3 .\nWhile multi-commodity fair connection games on series parallel graphs do not necessarily possess a SE ( see Theorem 3.6 ) , fair connection games on extension parallel graphs always possess a strong equilibrium .\nTHEOREM 3.5 .\nEvery fair connection game on an extension parallel graph possesses a strong equilibrium .\nFigure 1 : Graph topologies .\nPROOF .\nWe prove the theorem by induction on the network size | V | .\nLet \u039b be a fair connection game on an EPG G = ( V , E ) .\nThe claim obviously holds if | V | = 2 .\nIf the graph G is a parallel composition of two EPG graphs G1 and G2 , then the claim follows from Lemma 3.3 .\nIt remains to prove the claim for extension composition .\nSuppose the graph G is an extension composition of the graph G1 consisting of a single edge e = ( s1 , t1 ) and an EPG G2 = ( V2 , E2 ) with terminals s2 , t2 , such that s = s1 and t = t2 .\n( The case that G2 is a single edge is similar . )\nLet T1 be the set of players with source s1 and sink t1 ( i.e. , their path is in G1 ) .\nLet T2 be the set of players with source and sink in G2 .\nLet T3 be the set of players with source s1 and sink in V2 \\ t1 .\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3 , respectively .\nLet S0 , S00 be SE in \u039b1 and \u039b2 respectively .\nWe will show that the profile S = S0 \u222a S00 is a SE in the game \u039b .\nSuppose by contradiction that S is not a SE .\nThen , there is a coalition \u0393 of minimal size that can deviate such that the cost of any player j \u2208 \u0393 decreases .\nClearly , T1 \u2229 \u0393 = \u03c6 , since players in T1 have a single strategy .\nHence , \u0393 \u2286 T2 \u222a T3 .\nAny player j \u2208 T2 \u222a T3 can not improve his cost in \u039b1 .\nTherefore , any player j \u2208 T2 \u222a T3 improves his cost in \u039b2 .\nHowever , this contradicts the fact that S00 is a SE in \u039b2 .\nIn the following theorem we provide a few examples of topologies in which a strong equilibrium does not exist , showing that our characterization is almost tight .\nTHEOREM 3.6 .\nThe following connection games exist : ( 1 ) There exists a multi-commodity fair connection game on a series parallel graph that does not possess a strong equilibrium .\n( 2 ) There exists a single source fair connection game that does not possess a strong equilibrium .\nPROOF .\nFor claim ( 1 ) consider the graph depicted in Figure 1 ( a ) .\nThis game has a unique NE where S1 = { e , c } , S2 = { b , f } , and each player has a cost of 5.7 However , consider the following coordinated deviation S0 .\nS01 = { a , b , c } , 7In any NE of the game , player 1 will buy the edge e and player 2 will buy the edge f .\nThis is since the alternate path , in the respective part , will cost the player 2.5 .\nThus , player 1 ( 
player 2 ) will buy the edge c ( edge b ) alone , and each player will have a cost of 5 .\nFigure 2 : Example of a single source connection game that does not admit SE .\nand S02 = { b , c , d } .\nIn this profile , each player pays a cost of 4 , and thus improves its cost .\nFor claim ( 2 ) consider a single source fair connection game on the graph G depicted in Figure 2 .\nThere are two players .\nPlayer i = 1 , 2 wishes to connect the source s to its sink ti and the unique NE is S1 = { a , b } , S2 = { a , c } , and each player has a cost of 2 .\n8 Then , both players can deviate to S01 = { h , f , d } and S02 = { h , f , e } , and decrease their costs to 2 \u2212 e/2 .\nUnfortunately , our characterization is not completely tight .\nThe graph in Figure 1 ( b ) is an example of a non-extension parallel graph which always admits a strong equilibrium .\n3.2 Strong Price of Anarchy\nWhile the price of anarchy in fair connection games can be as bad as n , the following theorem shows that the strong\nTHEOREM 3.7 .\nThe strong price of anarchy of a fair connection game with n players is at most H ( n ) .\nPROOF .\nLet \u039b be a fair connection game on the graph G .\nWe denote by \u039b ( \u0393 ) the game played on the graph G by a set of players \u0393 , where the action of player i \u2208 \u0393 remains \u03a3i ( the same as in \u039b ) .\nLet S = ( S1 , ... , Sn ) be a profile in the game \u039b .\nWe denote by S ( \u0393 ) = S\u0393 the induced profile of players in \u0393 in the game \u039b ( \u0393 ) .\nLet ne ( S ( \u0393 ) ) denote the load of edge e under the profile S ( \u0393 ) in the game \u039b ( \u0393 ) , i.e. , ne ( S ( \u0393 ) ) = | { j | j \u2208 \u0393 , e \u2208 Sj } | .\nSimilar to congestion games [ 16 , 13 ] we denote by 4 ) ( S ( \u0393 ) ) the potential function of the profile S ( \u0393 ) in the game \u039b ( \u0393 ) , where 4 ) ( S ( \u0393 ) ) = ne ( S ( \u0393 ) )\nand define 4 ) ( S ( \u03c6 ) ) = 0 .\nIn our case , it holds that\nLet S be a SE , and let S \u2217 be the profile of the optimal solution .\nWe define an order on the players as follows .\nLet \u0393n = { 1 , ... , n } be the set of all the players .\nFor each k =\nn , ... , 1 , since S is a SE , there exists a player in \u0393k , w.l.o.g. call it player k , such that ,\nwhere the first inequality follows since the sum of the right hand side of equation ( 3 ) telescopes , and the second equality follows from equation ( 1 ) .\nNext we bound the SPoA when coalitions of size at most k are allowed .\nTHEOREM 3.8 .\nThe k-SPoA of a fair connection game with n players is at most nk \u00b7 H ( k ) .\nPROOF .\nLet S be a SE of \u039b , and S \u2217 be the profile of the optimal solution of \u039b .\nTo simplify the proof , we assume that n/k is an integer .\nWe partition the players to n/k groups T1 , ... , Tn/k each of size k. Let \u039bj be the game on the graph G played by the set of players Tj .\nLet S ( Tj ) denote the profile of the k players in Tj in the game \u039bj induced by the profile S of the game \u039b .\nBy Theorem 3.7 , it holds that for each game \u039bj , j = 1 , ... , n/k ,\nwhere the first inequality follows since for each group Tj and player i E Tj , it holds that ci ( S ) < ci ( S ( Tj ) ) .\nNext we show an almost matching lower bound .\n( The lower bound is at most H ( n ) = O ( log n ) from the upper bound and both for k = O ( 1 ) and k = \u03a9 ( n ) the difference is only a constant . 
Figure 3: Example of a network topology in which SPoS > PoS.
THEOREM 3.9. The k-SPoA of a fair connection game with n players is at least max{H(n), n/k}.
PROOF. For the lower bound of H(n) we observe that in the example presented in [2] the unique Nash equilibrium is also a strong equilibrium, and therefore k-SPoA = H(n) for any 1 ≤ k ≤ n. For the lower bound of n/k, consider a graph composed of two parallel links of costs 1 and n/k, and the profile S in which all n players use the link of cost n/k. The cost of each player is 1/k, while if any coalition of size at most k deviates to the link of cost 1, the cost of each deviating player is at least 1/k. Therefore the profile S is a k-SE, and k-SPoA = n/k.
The results of Theorems 3.7 and 3.8 can be extended to concave cost functions. Consider the extended fair connection game, in which each edge has a cost that depends on the number of players using that edge, c_e(n_e). We assume that the cost function c_e(n_e) is nondecreasing and concave. Note that the cost of an edge c_e(n_e) might increase with the number of players using it, but the cost per player f_e(n_e) = c_e(n_e)/n_e decreases when c_e(n_e) is concave.
THEOREM 3.10. The strong price of anarchy of a fair connection game with nondecreasing concave edge cost functions and n players is at most H(n).
PROOF. The proof is analogous to the proof of Theorem 3.7; we show that cost(S) ≤ Φ(S*) ≤ H(n) · cost(S*). For the first inequality, since the function c_e(x) is concave, the cost per player c_e(x)/x is nonincreasing, and therefore inequality (3) in the proof of Theorem 3.7 still holds. Summing inequality (3) over all players we obtain cost(S) = Σ_i c_i(S) ≤ Φ(S*(Γn)) − Φ(S*(∅)) = Φ(S*). The second inequality follows since c_e(x) is nondecreasing, and therefore Σ_{x=1}^{n_e} c_e(x)/x ≤ H(n_e) · c_e(n_e).
Using the arguments in the proof of Theorem 3.10 together with the proof of Theorem 3.8, we derive that the k-SPoA of a fair connection game with nondecreasing concave edge cost functions and n players is at most (n/k) · H(k).
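The two-parallel-links instance used in the n/k lower bound above can be verified mechanically; the following illustrative check (our own code) confirms that no coalition of size at most k gains strictly by moving to the cheap link.

```python
from fractions import Fraction

def is_k_se_on_two_links(n, k):
    """All n players sit on the link of cost n/k; a coalition of size j <= k that
    moves to the link of cost 1 would pay 1/j each. Verify there is no strict gain."""
    stay_cost = Fraction(n, k) / n           # = 1/k per player
    for j in range(1, k + 1):
        deviate_cost = Fraction(1, j)        # coalition of size j splits the cost-1 link
        if deviate_cost < stay_cost:         # a strict improvement would break the k-SE
            return False
    return True

assert is_k_se_on_two_links(n=12, k=4)       # the profile is a 4-SE; k-SPoA here is n/k = 3
```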
Since the set of strong equilibria is contained in the set of Nash equilibria, it must hold that SPoA ≤ PoA, i.e., the SPoA can only improve on the PoA. With respect to the price of stability, however, the opposite relation holds: SPoS ≥ PoS. We next show that there exists a fair connection game in which the inequality is strict.
Figure 4: Example of a single source general connection game that does not admit a strong equilibrium. The edges that are not labeled with costs have a cost of zero.
THEOREM 3.12. There exists a fair connection game in which SPoS > PoS.
PROOF. Consider a single source fair connection game on the graph G depicted in Figure 3. Player i = 1, ..., n wishes to connect the source s to his sink ti. Each player i = 1, ..., n − 2 has his own path of cost 1/i from s to ti, and players n − 1 and n have a joint path of cost 2/n from s to their sinks. In addition, all players can share a common path of cost 1 + ε, for some small ε > 0. The optimal solution connects all players through the common path of cost 1 + ε, and this is also a Nash equilibrium with total cost 1 + ε. It is easy to verify that the solution in which each player i = 1, ..., n − 2 uses his own path and players n − 1 and n use their joint path is the unique strong equilibrium of this game, with total cost Σ_{i=1}^{n−2} 1/i + 2/n = H(n − 2) + 2/n.
While the example above shows that the SPoS may be greater than the PoS, the upper bound of H(n) = Θ(log n), proven for the PoS [2], serves as an upper bound for the SPoS as well. This is a direct corollary of Theorem 3.7, since SPoS ≤ SPoA by definition.
COROLLARY 3.13. The strong price of stability of a fair connection game with n players is at most H(n) = O(log n).
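Before moving on to general connection games, the gap exhibited by the Theorem 3.12 family is easy to see numerically: the cost of the unique strong equilibrium grows like H(n − 2), while the best Nash equilibrium stays at 1 + ε. The snippet below is an illustrative computation only (our own code).

```python
from fractions import Fraction

def strong_eq_cost(n):
    """Total cost of the unique strong equilibrium in the Theorem 3.12 family:
    players 1..n-2 buy their own 1/i paths; players n-1 and n share the 2/n path."""
    return sum(Fraction(1, i) for i in range(1, n - 1)) + Fraction(2, n)

def best_ne_cost(eps):
    """The common path of cost 1 + eps is both optimal and a Nash equilibrium."""
    return 1 + eps

for n in (10, 100, 1000):
    ratio = float(strong_eq_cost(n)) / best_ne_cost(eps=0.01)
    print(n, round(ratio, 2))   # grows roughly like H(n - 2), i.e. Theta(log n)
```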
4. GENERAL CONNECTION GAMES
In this section we derive our results for general connection games.
4.1 Existence of Strong Equilibrium
We begin with a characterization of the existence of a strong equilibrium in symmetric general connection games. Similar to Theorem 3.1 (using a similar proof) we establish:
THEOREM 4.1. In every symmetric general connection game there exists a strong equilibrium.
While every single source general connection game possesses a pure Nash equilibrium [3], it does not necessarily admit a strong equilibrium. (The fair connection game inspired this example.)
THEOREM 4.2. There exists a single source general connection game that does not admit any strong equilibrium.
PROOF. Consider the single source general connection game with 3 players on the graph depicted in Figure 4. Player i wishes to connect the source s with his sink ti. We need to consider only the NE profiles: (i) if all three players use the link of cost 3, then there must be two agents whose total payment is at least 2, and they can both reduce their cost by deviating together to an edge of cost 2 − ε; (ii) if two of the players jointly use an edge of cost 2 − ε and the third player uses a different edge of cost 2 − ε, then the players with non-zero payments can deviate to the path containing the edge of cost 3 and reduce their costs (since before the deviation the total payment of these players is 4 − 2ε). We have shown that no NE is a SE, and thus the game does not possess any SE.
Next we show that for the class of series-parallel graphs there is always a strong equilibrium in the case of a single source.
PROOF. Let Λ be a single source general connection game on a SPG G = (V, E) with source s and sink t. We present an algorithm, COMPUTE-SE, that constructs a specific SE. We first consider the following partial order between the players: for players i and j, we have i → j if there is a directed path from ti to tj. We complete the partial order to a full order (in an arbitrary way), and w.l.o.g. we assume that 1 → 2 → ··· → n. The algorithm COMPUTE-SE considers the players in increasing order, starting with player 1. Each player i fully buys a subset of the edges, and any player j > i then regards the cost of those (bought) edges as zero. That is, when COMPUTE-SE considers player j, the cost of the edges that players 1 to j − 1 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj: for every edge e ∈ Qj \ ∪_{i<j} Qi we have pj(e) = ce, and otherwise pj(e) = 0.
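The construction is straightforward to express in code. The following is an illustrative Python sketch of COMPUTE-SE (our own implementation and naming; it assumes the sinks are already listed in an order consistent with the partial order above): each player buys a shortest path with respect to residual costs, and edges bought by earlier players are treated as free.

```python
import heapq

def shortest_path_edges(nodes, edges, cost, src, dst):
    """Dijkstra returning the set of edges on one shortest src->dst path."""
    dist = {v: float("inf") for v in nodes}
    prev = {}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for (a, b) in edges:
            if a == u and d + cost[(a, b)] < dist[b]:
                dist[b] = d + cost[(a, b)]
                prev[b] = (a, b)
                heapq.heappush(heap, (dist[b], b))
    path, v = set(), dst
    while v != src:
        e = prev[v]
        path.add(e)
        v = e[0]
    return path

def compute_se(nodes, edge_cost, source, sinks):
    """COMPUTE-SE sketch: player j buys a shortest path to its sink, paying the full
    cost of every edge not already bought by players 1..j-1 (those edges are free)."""
    residual = dict(edge_cost)
    payments = []
    for t in sinks:                        # sinks assumed listed in the t_i -> t_j order
        path = shortest_path_edges(nodes, list(edge_cost), residual, source, t)
        pay = {e: residual[e] for e in path if residual[e] > 0}
        for e in path:
            residual[e] = 0.0              # later players see these edges as free
        payments.append(pay)
    return payments

nodes = {"s", "a", "t"}
edge_cost = {("s", "a"): 3.0, ("a", "t"): 1.0, ("s", "t"): 5.0}
print(compute_se(nodes, edge_cost, "s", sinks=["a", "t"]))
# [{('s', 'a'): 3.0}, {('a', 't'): 1.0}]
```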
We next show that the algorithm COMPUTE-SE computes a SE. Assume by way of contradiction that the profile p is not a SE. Then there exists a coalition that can improve the costs of all its players by a deviation; let Γ be such a coalition of minimal size and let player i = max{j ∈ Γ}. For a player j ∈ Γ let Q̄j and p̄j be the path and the payment of player j after the deviation, respectively. Let Q' be a path from the sink of player i, i.e. ti, to the sink of G, i.e. t. Then Q = Q̄i ∪ Q' is a path from the source s to the sink t. For any player j < i, let yj be the intersecting vertex of Q and tj (by Lemma 2.1 such a vertex is guaranteed to exist), and let y be the furthest vertex on the path Q such that y = yj for some j < i. The path from the source s to the node y was fully paid for by players j < i in p (before the deviation). There are two cases to consider.
Case a: After the deviation, player i does not pay for any edge in ∪_{j ∈ Γ\{i}} Q̄j. This contradicts the minimality of the coalition Γ, since the players in Γ \ {i} could form a smaller coalition with payments p̄.
Case b: Otherwise, we show that player i's cost after the deviation, ci(p̄), is at least his cost before the deviation, ci(p), contradicting the fact that player i improved his cost. Recall that, given two vertices u, v on a path Q̄, we denote by Q̄_{u,v} the subpath of Q̄ from u to v. Before the deviation of the coalition Γ, a path from s to y was fully paid for by the players j < i. We next show that no player k > i pays for any edge on any path from s to ti. Consider a player k > i and let Q'_k = Qk ∪ Q''_k, where Q''_k is a path connecting tk to t. Let yk be the intersecting vertex of Q'_k and ti. Since there exists a path from s to yk that was fully paid for by players j < k before the deviation, in particular the subpath of Qi from s to yk, player k will not pay for any edge on any path connecting s and yk. Therefore player i fully pays for all edges on the path Q̄i_{y,ti}, i.e., p̄i(e) = ce for every edge e ∈ Q̄i_{y,ti}. Now consider the algorithm COMPUTE-SE at the step in which player i selects a shortest path from the source s to his sink ti and determines his payment pi. At that point player i could have bought the path Q̄i_{y,ti}, since a path from s to y was already paid for by players j < i. Hence ci(p̄) ≥ ci(p). This contradicts the fact that player i improved his cost, and therefore not all the players in Γ reduce their cost. This implies that p is a strong equilibrium.
4.2 Strong Price of Anarchy
While for every single source general connection game it holds that PoS = 1 [3], the price of anarchy can be as large as n, even for two parallel edges. Here we show that any strong equilibrium in a single source general connection game yields the optimal cost.
PROOF. Let p = (p1, ..., pn) be a strong equilibrium, and let T* be the minimum cost Steiner tree spanning all players, rooted at the (single) source s. Let T*_e be the subtree of T* disconnected from s when the edge e is removed, and let Γ(T_e) be the set of players whose sinks lie in T_e. For a set of edges E', let c(E') = Σ_{e ∈ E'} ce, and let P(T_e) = Σ_{i ∈ Γ(T_e)} ci(p). Assume by way of contradiction that c(p) > c(T*). We will show that there exist a subtree T' of T* connecting a subset of the players Γ ⊆ N, and a new set of payments p̄, such that ci(p̄) < ci(p) for every i ∈ Γ. This will contradict the assumption that p is a strong equilibrium.
First we show how to find a subtree T' of T* such that, for every edge e of T', the payments of the players with sinks in T'_e exceed the cost of T'_e ∪ {e}. To build T', define an edge e to be bad if the cost of T*_e ∪ {e} is at least the payments of the players with sinks in T*_e, i.e., c(T*_e ∪ {e}) ≥ P(T*_e). Let B be the set of bad edges, and define T' = T* − ∪_{e ∈ B}(T*_e ∪ {e}). Note that we can find a subset B' of B such that ∪_{e ∈ B'}(T*_e ∪ {e}) = ∪_{e ∈ B}(T*_e ∪ {e}) and such that T*_{e1} ∩ T*_{e2} = ∅ for any e1, e2 ∈ B'. (The set B' includes every edge e ∈ B for which there is no other edge e' ∈ B on the path from e to the source s.) Considering the edges e ∈ B', we can see that deleting any subtree T*_e from T* cannot decrease the difference between the payments and the cost of the remaining tree. Therefore, in T', for every edge e we have c(T'_e ∪ {e}) < P(T'_e). Now we have a tree T', and our coalition will be Γ(T'). What remains is to find payments p̄ for the players in Γ(T') such that they buy the tree T' and every player in Γ(T') lowers his cost, i.e., ci(p̄) < ci(p) for every i ∈ Γ(T'). (Recall that the payments are restricted: player i can pay only for edges on the path from s to ti.)
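As a purely illustrative aside (our own code and data structures, not the paper's), the pruning step that produces T' can be sketched as follows: compute, for every tree edge, the cost of the subtree hanging below it and the current payments of the players in that subtree, mark the edge bad when the former plus the edge itself is at least the latter, and cut each bad subtree at its topmost bad edge.

```python
def prune_bad_subtrees(children, edge_cost, paying, root):
    """children: node -> list of child nodes (tree rooted at the source);
    edge_cost[(u, v)]: cost of the tree edge u->v;
    paying[v]: total current payment c_i(p) of the players whose sink is v.
    Returns the set of nodes kept in T' after cutting every topmost bad edge."""
    subtree_cost, subtree_pay = {}, {}

    def walk(v):                          # post-order: cost of and payments inside each subtree
        c = p = 0.0
        for w in children.get(v, []):
            walk(w)
            c += subtree_cost[w] + edge_cost[(v, w)]
            p += subtree_pay[w]
        subtree_cost[v] = c
        subtree_pay[v] = p + paying.get(v, 0.0)

    def keep(v, kept):                    # pre-order: stop descending at the first bad edge
        kept.add(v)
        for w in children.get(v, []):
            bad = subtree_cost[w] + edge_cost[(v, w)] >= subtree_pay[w]
            if not bad:
                keep(w, kept)
        return kept

    walk(root)
    return keep(root, set())
```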
We will now define the coalition payments p̄. Let c_i(p̄, T'_e) = Σ_{e' ∈ T'_e} p̄_i(e') be the payment of player i for the subtree T'_e. We will show that for every subtree T'_e, c_i(p̄, T'_e ∪ {e}) < c_i(p), and hence c_i(p̄) < c_i(p). Consider the following bottom-up process that defines p̄. We assign the payments for an edge e in T' only after we have assigned payments for all the edges in T'_e. This implies that when we assign payments for e, the sum of the payments already made in T'_e equals c(T'_e). We also know that c(T'_e ∪ {e}) = c(T'_e) + c_e < P(T'_e). Therefore, we can update the payments p̄ of the players i ∈ Γ(T'_e) by setting

p̄_i(e) = c_e · (c_i(p) − c_i(p̄, T'_e)) / (P(T'_e) − c(T'_e)),

where we used the fact that Σ_{i ∈ Γ(T'_e)} (c_i(p) − c_i(p̄, T'_e)) = P(T'_e) − c(T'_e). Since c_e < P(T'_e) − c(T'_e), it follows that c_i(p̄, T'_e ∪ {e}) < c_i(p)."} {"id": "C-6", "title": "", "abstract": "", "keyphrases": ["spectrum content manag system", "continu media storag", "home-network scenario", "applic program interfac", "content distribut network", "uniform resourc locat", "polici manag", "network enabl dvr", "high-perform databas system", "carrier-grade spectrum manag", "distribut content manag"], "prmu": [], "lvl-1": "Design and Implementation of a Distributed Content Management System
C. D. Cranor, R. Ethington, A. Sehgal†, D. Shur, C. Sreenan‡ and J.E. van der Merwe
AT&T Labs - Research, Florham Park, NJ, USA; † University of Kentucky, Lexington, KY, USA; ‡ University College Cork, Cork, Ireland
ABSTRACT
The convergence of advances in storage, encoding, and networking technologies has brought us to an environment where huge amounts of continuous media content are routinely stored and exchanged between network enabled devices. Keeping track of (or managing) such content remains challenging due to the sheer volume of data. Storing live continuous media (such as TV or radio content) adds to the complexity in that this content has no well defined start or end and is therefore cumbersome to deal with. Networked storage allows content that is logically viewed as part of the same collection to in fact be distributed across a network, making the task of content management all but impossible without a content management system. In this paper we present the design and implementation of the Spectrum content management system, which deals with rich media content effectively in this environment. Spectrum has a modular architecture that allows its application to both stand-alone and various networked scenarios. A unique aspect of Spectrum is that it requires one (or more) retention policies to apply to every piece of content that is stored in the system. This means that there are no eviction policies. Content that no longer has a retention policy applied to it is simply removed from the system. Different retention policies can easily be applied to the same content, thus naturally facilitating sharing without duplication. This approach also allows Spectrum to easily apply to content the time based policies that are basic building blocks for storing live continuous media. We not only describe the details of the Spectrum architecture but also give typical use cases.
Categories and Subject Descriptors: C.2.4 [Computer Systems Organization]: Computer-communication Networks - distributed systems; H.3.4 [Information Systems]: Information Storage and Retrieval - systems and software
General Terms: Design, Management
1. INTRODUCTION
Manipulating and managing content is and has always been one of the primary functions of a
computer.\nInitial computing applications include text formatters and program compilers.\nContent was initially managed by explicit user interaction through the use of files and filesystems.\nAs technology has advanced, both the types of content and the way people wish to use it have greatly changed.\nNew content types such as continuous multimedia streams have become commonplace due to the convergence of advances in storage, encoding, and networking technologies.\nFor example, by combining improvements in storage and encoding, it is now possible to store many hours of TV-quality encoded video on a single disk drive.\nThis has led to the introduction of stand alone digital video recording or personal video recording (PVR) systems such as TiVO [8] and ReplayTV [7].\nAnother example is the combination of encoding and broadband networking technology.\nThis combination has allowed users to access and share multimedia content in both local and remote area networks with the network itself acting as a huge data repository.\nThe proliferation of high quality content enabled by these advances in storage, encoding, and networking technology creates the need for new ways to manipulate and manage the data.\nThe focus of our work is on the storage of media rich content and in particular the storage of continuous media content in either pre-packaged or live forms.\nThe need for content management in this area is apparent when one consider the following: \u2022 Increases in the capacity and decreases in the cost of storage means that even modest desktop systems today have the ability to store massive amounts of content.\nManaging such content manually (or more correctly manual non-management of such content) lead to great inefficiencies where unwanted and forgotten content waste storage and where wanted content cannot be found.\n\u2022 While true for all types of content the storage of continuous media content is especially problematic.\nFirst continuous media content is still very demanding in terms of storage resources which means that a policy-less approach to storing it will not work for all but the smallest systems.\nSecond, the storing of live content such as TV or radio is inherently problematic as these signals are continuous streams with no endpoints.\nThis means that before one can even think about managing such content there is a need to abstract it into something that could be manipulated and managed.\n4 \u2022 When dealing with stored continuous media there is a need to manage such content at both a fine-grained as well as an aggregate level.\nFor example, an individual PVR user wanting to keep only the highlights of a particular sporting event should not be required to have to store the content pertaining to the complete event.\nAt the same time the user might want to think of content in the aggregate, e.g. 
remove all of the content that I have not watched for the last month except that content which was explicitly marked for archival.\n\u2022 As indicated above, trying to keep track of content on a standalone system without a content management system is very difficult.\nHowever, when the actual storage devices are distributed across a network the task of keeping track of content is almost impossible.\nThis scenario is increasingly common in network based content distribution systems and is likely to also become important in home-networking scenarios.\nIt would seem clear then that a content management system that can efficiently handle media rich content while also exploiting the networked capability of storage devices is needed.\nThis system should allow efficient storage of and access to content across heterogeneous network storage devices according to user preferences.\nThe content management system should translate user preferences into appropriate low-level storage policies and should allow those preferences to be expressed at a fine level of granularity (while not requiring it in general).\nThe content management system should allow the user to manipulate and reason about (i.e. change the storage policy associated with) the storage of (parts of) continuous media content.\nAddressing this distributed content management problem is difficult due to the number of requirements placed on the system.\nFor example: \u2022 The content management system must operate on a large number of heterogeneous systems.\nIn some cases the system may be managing content stored on a local filesystem, while in others the content may be stored on a separate network storage appliance.\nThe content manager may be responsible for implementing the policies it uses to reference content or that role may be delegated to a separate computer.\nA application program interface (API) and associated network protocols are needed in order for the content management system to provide a uniform interface.\n\u2022 The content management system should be flexible and be able to handle differing requirements for content management policies.\nThese policies reflect what content should be obtained, when it should be fetched, how long it should be retained, and under what circumstances it should be discarded.\nThis means that the content management system should allow multiple applications to reference content with a rich set of policies and that it should all work together seamlessly.\n\u2022 The content management system needs to be able to monitor references for content and use that information to place content in the right location in the network for efficient application access.\n\u2022 The content management system must handle the interaction between implicit and explicit population of content at the network edge.\n\u2022 The content system must be able to efficiently manage large sets of content, including continuous streams.\nIt needs to be able to package this content in such a way that it is convenient for users to access.\nTo address these issues we have designed and implemented the Spectrum content management system architecture.\nOur layered architecture is flexible - its API allows the layers to reside either on a single computer or on multiple networked heterogeneous computers.\nIt allows multiple applications to reference content using differing policies.\nNote that the Spectrum architecture assumes the existence of a content distribution network (CDN) that can facilitate the efficient distribution of content (for 
example, the PRISM CDN architecture [2]).\nThe rest of this paper is organized as follows.\nSection 2 describes the architecture of our content management system.\nIn Section 3 we describe both our implementation of the Spectrum architecture and examples of its use.\nRelated work is described in Section 4, and Section 5 contains our conclusion and suggestions for future work.\n2.\nTHE SPECTRUM DISTRIBUTED CONTENT MANAGEMENT SYSTEM ARCHITECTURE The Spectrum architecture consists of three distinct management layers that may or may not be distributed across multiple machines, as shown in Figure 1.\nThe three layers are: content manager: contains application specific information that is used to manage all of an application``s content according to user preferences.\nFor example, in a personal video recorder (PVR) application the content manager receives requests for content from a user interface and interacts with the lower layers of the Spectrum architecture to store and manage content on the device.\npolicy manager: implements and enforces various storage polices that the content manager uses to refer to content.\nThe policy manager exports an interface to the content manager that allows the content manager to request that a piece content be treated according to a specific policy.\nSpectrum allows for arbitrary policies to be realized by providing a fixed set of base-policy templates that can easily be parameterized.\nIt is our belief that for most implementations this will be adequate (if not, Spectrum can easily be extended to dynamically load new base-policy template code at run time).\nA key aspect of the policy manager is that it allows different policies to be simultaneously applied to the same content (or parts of the same content).\nFurthermore content can only exist in the system so long as it is referenced by at least one existing policy.\nPolicy conflicts are eliminated by having the policy manager deal exclusively with retention policies rather than with a mix of retention and eviction policies.\nThis means that content with no policy associated with it is immediately and automatically removed from the system.\nThis approach allows us to naturally support sharing of content across different policies which is critical to the efficient storage of large objects.\nNote that a key difference between the content manager and the policy manager is that the content manager manages references to multiple pieces of content, i.e. 
it has an application view of content. The policy manager, on the other hand, is only concerned with the policy used to manage standalone pieces of content. For example, in a PVR application the content manager layer would know about the different groups of managed content, such as "keep indefinitely", "keep for one day", and "keep if disk space is available". At the policy manager level, however, each piece of content has its own policy (or policies) applied to it and is independent of other content.
Figure 1: The components of the Spectrum architecture and the four ways they can be configured.
storage manager: stores content in an efficient manner while supporting the objectives of the higher layers. Specifically, the storage manager stores content in sub-object chunks. This approach has advantages for the efficient retrieval of content, but more importantly it allows policies to be applied at a sub-object level, which is critical when dealing with very large objects such as parts of continuous media, e.g. selected pieces of TV content stored on a PVR. Note that the storage manager has no knowledge of the policies being used by the content and policy managers.
Another unique part of our approach is that the interfaces between the layers can be either local or distributed. Figure 1 shows the four possible cases. The case on the far left of the figure is the simplest (non-distributed) case, where all the layers are implemented on a single box. This configuration would be used in self-contained applications such as PVRs. The next case corresponds to a centralized content manager that controls distributed storage devices, each of which is responsible for implementing policy-based storage. Although the remote devices are controlled by the central manager, they operate much more independently; once they receive instructions from the central manager they typically operate in an autonomous fashion. An example of this type of configuration is a content distribution network (CDN) that distributes and stores content based on a schedule determined by some centralized controller. For example, the CDN could pre-populate edge devices with content that is expected to be very popular, or distribute large files to branch offices during off-peak hours in a bandwidth-constrained enterprise environment. Allowing a single policy manager to control several storage managers leads to the next combination of functions and the most distributed case. The need for this sort of separation might arise for scalability reasons, or when different specialized storage devices or appliances must be controlled by a single policy manager. The final case shows a content manager combined with a policy manager controlling a remote storage manager. This separation is possible when the storage manager is somewhat autonomous and does not require continuous fine-grained control by the policy manager.
We now examine the function of the three layers in detail.
2.1 Content Manager
The content manager layer is the primary interface through which specific applications use the Spectrum architecture. As such, the content manager layer provides an API for the application to manipulate all aspects of the Spectrum architecture at different levels of
granularity.\nThe content manager API has functions that handle: Physical devices: This set of functions allows physical storage devices to be added to Spectrum thereby putting them under control of the content manager and making the storage available to the system.\nPhysical devices can be local or remote - this is the only place in the architecture where the application is required to be aware of this distinction.\nOnce a device is mapped into the application through this interface, the system tracks its type and location.\nUsers simply refer to the content through an application-provided label.\nStores: Stores are subsets of physical storage devices.\nThrough these functions an application can create a store on a physical device and assign resources (e.g. disk space) to it.\nStores can only be created in physical devices that are mapped into the system.\nPolicy Groups: Policy groups are the means whereby an application specifies, instantiates, and modifies the policies that are applied to Spectrum content.\nTypical usage of this set of functions is to select one of a small set of base policies and to parameterize this specific instance of the policy.\nPolicy groups are created within existing stores in the system.\nThe Spectrum architecture has policies that are normally associated with storage that aim to optimize disk usage.\nIn addition a set of policies that take a sophisticated time specification enable storage that is cognizant of time.\nFor example, a simple time-based policy could evict content from the system at a certain absolute or relative time.\nA slightly more involved time-based policy enabled by the Spectrum architecture could allow content to be stored in rolling window of a number of hours (for example, the most recent N-number of hours is kept in the system).\nTime-based polices are of particular use when dealing with continuous content like a live broadcast.\n6 Content: At the finest level of granularity content can be added to or removed from the system.\nContent is specified to the system by means of a uniform resource locator (URL) which concisely indicates the location of the content as well as the protocol to be used to retrieve it.\nOptionally a time specification can be associated with content.\nThis allows content to be fetched into the system at some future time, or at future time intervals.\nAgain, this is particularly useful for dealing with the storage and management of live content.\n2.2 Policy Manager The policy manager layer of the Spectrum architecture has two main types of API functions.\nFirst, there are functions that operate on managed storage areas and policy-based references (prefs) to content stored there.\nSecond, there are sets of functions used to implement each management policy.\nThe first class of functions is used by the content manager layer to access storage.\nOperations include: create, open, and close: These operations are used by the content manager to control its access to storage.\nThe policy manager``s create operation is used to establish contact with a store for the first time.\nOnce this is done, the store can be open and closed using the appropriate routines.\nNote that the parameters used to create a store contain information on how to reach it.\nFor example, local stores have a path associated with them, while remote stores have a remote host and remote path associated with them.\nThe information only needs to be passed to the policy manager once at create time.\nFor open operations, the policy manager will use cached 
information to contact the store.\nlookup: The lookup operation provides a way for the content manager to query the policy manager about what content is currently present for a given URL.\nFor continuous media time ranges of present media will be returned.\nresource: The resource routines are used to query the policy manager about its current resource usage.\nThere are two resource routines: one that applies to the store as a whole and another that applies to a particular policy reference.\nThe resource API is extensible, we currently support queries on disk usage and I/O load.\npref establish/update: The pref establish operation is used by the content manager to reference content on the store.\nIf the content is not present, this call will result in the content being fetched (or being scheduled to be fetched if the content is not currently available).\nParameters of this function include the URL to store it under, the URL to fetch data from if it is not present, the policy to store the content under, and the arguments used to parameterize the policy.\nThe result of a successful pref establish operation is a policy reference ID string.\nThis ID can be used with the update operation to either change the storage policy parameters or delete the reference entirely.\nThe second group of policy manager functions are used to implement all the polices supported by Spectrum.\nWe envision a small set of base-level policy functions that can be parameterized to produce a wide range of storage polices.\nFor example, a policy that implements recording a repeating time window can be parameterized to function daily, weekly, or monthly.\nNote that the policy manager is only concerned with executing a specific policy.\nThe higher-level reasons for choosing a given policy are handled by the content and application manager.\nA base policy is implemented using six functions: establish: called when a pref is established with the required URLs and base policy``s parameters.\nThe establish routine references any content already present in the store and then determines the next time it needs to take action (e.g. start a download) and schedules a callback for that time.\nIt can also register to receive callbacks if new content is received for a given URL.\nupdate: called to change the parameters of a pref, or to discard the policy reference.\nnewclip: called when a chunk of new content is received for a URL of interest.\nThe base policy typically arranges for newclip to be called for a given URL when the pref is established.\nWhen newclip is called, the base policy checks its parameters to determine if it wishes to add a reference to the clip just received.\ncallback: called when the pref schedules a timer-based callback.\nThis is a useful wakeup mechanism for prefs that need to be idle for a long period of time (e.g. between programs).\nboot/shutdown: called when the content management system is booting or shutting down.\nThe boot operation is typically used to schedule initial callbacks or start I/O operations.\nThe shutdown operation is used to gracefully shutdown I/O streams and save state.\n2.3 Storage Manager The role of Spectrum``s storage manager is to control all I/O operations associated with a given store.\nSpectrum``s storage manager supports storing content both on a local filesystem and on a remote fileserver (e.g. 
a storage appliance).\nFor continuous media, at the storage manager level content is stored as a collection of time-based chunks.\nDepending on the underlying filesystem, a chunk could correspond to a single file or a data node in a storage database.\nThe two main storage manager operations are input and output.\nThe input routine is used to store content in a store under a given name.\nThe output routine is used to send data from the store to a client.\nFor streaming media both the input and output routines take time ranges that schedule when the I/O operation should happen, and both routines return an I/O handle that can be used to modify or cancel the I/O request in the future.\nMuch like the policy manager, the storage manager also provides API functions to create, open, and close stores.\nIt also supports operations to query the resource usages and options supported by the store.\nFinally, the storage manager also has a discard routine that may be used by the policy manager to inform the store to remove content from the store.\n3.\nIMPLEMENTATION AND USE CASES In this section we describe our implementation of Spectrum and describe how it can be used.\n3.1 Implementation We have implemented Spectrum``s three layers in C as part of a library that can be linked with Spectrum-based applications.\nEach layer keeps track of its state through a set of local data files that persist across reboots, thus allowing Spectrum to smoothly handle power cycles.\nFor layers that reside on remote systems (e.g. a remote store) only the meta-information needed to contact the remote 7 Content Manager Policy Manager Storage Manager Storage Fetcher Program Listings Graphical User Interface Network Enabled DVR Program Information Content DVR Application Figure 2: Spectrum in a Network Enabled DVR node is stored locally.\nOur test application uses a local policy and storage manager to fetch content and store it in a normal Unixbased filesystem.\nTo efficiently handle communications with layers running on remote systems, all Spectrum``s API calls support both synchronous and asynchronous modes through a uniform interface defined by the reqinfo structure.\nEach API call takes a pointer to a reqinfo structure as one of its arguments.\nThis structure is used to hold the call state and return status.\nFor async calls, the reqinfo also contains a pointer to a callback function.\nTo use a Spectrum API function, the caller first chooses either the sync or async mode and allocates a reqinfo structure.\nFor sync calls, the reqinfo can be allocated on the stack, otherwise it is allocated with malloc.\nFor async calls, a callback function must be provided when the reqinfo is allocated.\nNext the caller invokes the desired Spectrum API function passing the reqinfo structure as an argument.\nFor sync calls, the result of the calls is returned immediately in the reqinfo structure.\nFor successful async calls, a call in progress value is returned.\nLater, when the async call completes or a timeout occurs, the async callback function is called with the appropriate information needed to complete processing.\nThe modular/layered design of the Spectrum architecture simplifies the objective of distribution of functionality.\nFurthermore, communication between functions is typically of a master-slave(s) nature.\nThis means that several approaches to distributed operation are possible that would satisfy the architectural requirements.\nIn our implementation we have opted to realize this functionality with a simple modular 
design.\nWe provide a set of asynchronous remote access stub routines that allow users to select the transport protocol to use and to select the encoding method that should be used with the data to be transferred.\nTransport protocols can range simple protocols such as UDP up to more complex protocols such as HTTP.\nWe currently are using plain TCP for most of our transport.\nFunction calls across the different Spectrum APIs can be encoded using a variety of formats include plain text, XDR, and XML.\nWe are currently using the eXpat XML library [4] to encode our calls.\nWhile we are current transferring our XML encoded messages using a simple TCP connection, in a real world setting this can easily be replaced with an implementation based on secure sockets layer (SSL) to improve security by adding SSL as a transport protocol.\nAn important aspect of Spectrum is that it can manage content based on a given policy across heterogenous platforms.\nAs we explained previously in Section 2.2, envision a small set of base-level policy functions that can be parameterized to produce a wide range of storage polices.\nIn order for this to work properly, all Spectrumbased applications must understand the base-level policies and how they can be parameterized.\nTo address this issue, we treat each base-level policy as if it was a separate program.\nEach base-level policy should have a well known name and command line options for parameterization.\nIn fact, in our implementation we pass parameters to base-level policies as a string that can be parsed using a getopt-like function.\nThis format is easily understood and provides portability since byte order is not an issue in a string.\nSince this part of Spectrum is not on the critical data path, this type of formatting is not a performance issue.\n3.2 Using the Spectrum Content Management System In this section we show two examples of the use of the Spectrum Content Management System in our environment.\nThe focus of our previous work has been content distribution for streaming media content [2] and network enabled digital video recording [3].\nThe Spectrum system is applicable to both scenarios as follows.\nFigure 2 shows the Network Enabled DVR (NED) architecture.\nIn this case all layers of the Spectrum architecture reside on the same physical device in a local configuration.\nThe DVR application obtains program listings from some network source, deals with user presentation through a graphical user interface (GUI), and interface with the Spectrum system through the content management layer APIs.\nThis combination of higher level functions allows the user to select both content to be stored and what storage policies to 8 Content Manager Centralized Content Management station Content InformationUser Interface Policy Manager Storage Manager Storage Fetcher Edge Portal Server Policy Manager Storage Manager Storage Fetcher Edge Portal Server Distributed Content To Media Endpoints To Media Endpoints Figure 3: Spectrum in a Content Distribution Architecture apply to such content.\nObtaining the content (through the network or locally) and the subsequent storage on the local system is then handled by the policy and storage managers.\nThe use of Spectrum in a streaming content distribution architecture (e.g. 
PRISM [2]) is depicted in Figure 3.\nIn this environment streaming media content (both live, canned-live and on-demand) is being distributed to edge portals from where streaming endpoints are being served.\nIn our environment content distribution and storage is done from a centralized content management station which controls several of the edge portals.\nThe centralized station allows administrators to manage the distribution and storage of content without requiring continuous communication between the content manager and the edge devices, i.e. once instructions have been given to edge devices they can operate independently until changes are to be made.\n3.3 Spectrum Operational Example To illustrate how Spectrum handles references to content, consider a Spectrum-based PVR application programmed to store one days worth of streaming content in a rolling window.\nTo set up the rolling window, the application would use the content manager API to create a policy group and policy reference to the desired content.\nThe establishment of the one-day rolling window policy reference would cause the policy manger to ask the storage manager to start receiving the stream.\nAs each chunk of streaming data arrives, the policy manager executes the policy reference``s newclip function.\nThe newclip function adds a reference to each arriving chunk, and schedules a callback a day later.\nAt that time, the policy will drop its now day-old reference to the content and the content will be discarded unless it is referenced by some other policy.\nNow, consider the case where the user decides to save part of the content (e.g. a specific program) in the rolling window for an extra week.\nTo do this, the application requests that the content manager add an additional new policy reference to the part of the content to preserved.\nThus, the preserved content has two references to it: one from the rolling window and one from the request to preserve the content for an additional week.\nAfter one day the reference from the rolling window will be discarded, but the content will be 9 ref2, etc. base data url1 url2 (media files...) (media files...) meta store (general info...) url1 chunks prefs ranges media chunks, etc.url2 poly host ref1 ref1.files ref1.state Figure 4: Data layout of Spectrum policy store preserved by the second reference.\nAfter the additional week has past, the callback function for the second reference will be called.\nThis function will discard the remaining reference to the content and as there are no remaining references the content will be freed.\nIn order to function in scenarios like the ones described above, Spectrum``s policy manager must manage and maintain all the references to various chunks of media.\nThese references are persistent and thus must be able to survive even if the machine maintaining them is rebooted.\nOur Spectrum policy manager implementation accomplishes this using the file and directory structure shown in Figure 4.\nThere are three classes of data stored, and each class has its own top level directory.\nThe directories are: data: this directory is used by the storage manager to store each active URL``s chunks of media.\nThe media files can be encoded in any format, for example MPEG, Windows Media, or QuickTime.\nNote that this directory is used only if the storage manager is local.\nIf the policy manager is using an external storage manager (e.g. 
a storage appliance), then the media files are stored remotely and are only remotely referenced by the policy manager.\nmeta: this directory contains general meta information about the storage manager being used and the data it is storing.\nGeneral information is stored in the store subdirectory and includes the location of the store (local or remote) and information about the types of chunks of data the store can handle.\nThe meta directory also contains a subdirectory per-URL that contains information about the chunks of data stored.\nThe chunks file contains a list of chunks currently stored and their reference counts.\nThe prefs file contains a list of active policy references that point to this URL.\nThe ranges file contains a list of time ranges of data currently stored.\nFinally, the media file describes the format of the media being stored under the current URL.\npoly: this directory contains a set of host subdirectories.\nEach host subdirectory contains the set of policy references created by that host.\nInformation on each policy reference is broken up into three files.\nFor example, a policy reference named ref1 would be stored in ref1, ref1.files, and ref1.state.\nThe ref1 file contains information about the policy reference that does not change frequently.\nThis information includes the base-policy and the parameters used to create the reference.\nThe ref1.files file contains the list of references to chunks that pref ref1 owns.\nFinally, the ref1.state file contains optional policy-specific state information that can change over time.\nTogether, these files and directories are used to track references in our implementation of Spectrum.\nNote that other implementations are possible.\nFor example, a carrier-grade Spectrum manager might store all its policy and reference information in a high-performance database system.\n10 4.\nRELATED WORK Several authors have addressed the problem of the management of content in distributed networks.\nMuch of the work focuses on the policy management aspect.\nFor example in [5], the problem of serving multimedia content via distributed servers is considered.\nContent is distributed among server resources in proportion to user demand using a Demand Dissemination Protocol.\nThe performance of the scheme is benchmarked via simulation.\nIn [1] content is distributed among sub-caches.\nThe authors construct a system employing various components, such as a Central Router, Cache Knowledge base, Subcaches, and a Subcache eviction judge.\nThe Cache Knowledge base allows sophisticated policies to be employed.\nSimulation is used to compare the proposed scheme with well-known replacement algorithms.\nOur work differs in that we are considering more than the policy management aspects of the problem.\nAfter carefully considering the required functionality to implement content management in the networked environment, we have partitioned the system into three simple functions, namely Content manager, Policy manager and Storage manager.\nThis has allowed us to easily implement and experiment with a prototype system.\nOther related work involves so called TV recommendation systems which are used in PVRs to automatically select content for users, e.g. [6].\nIn the case where Spectrum is used in a PVR configuration this type of system would perform a higher level function and could clearly benefit from the functionalities of the Spectrum architecture.\nFinally, in the commercial CDN environment vendors (e.g. 
Cisco and Netapp) have developed and implemented content management products and tools.\nUnlike the Spectrum architecture which allows edge devices to operate in a largely autonomous fashion, the vendor solutions typically are more tightly coupled to a centralized controller and do not have the sophisticated time-based operations offered by Spectrum.\n5.\nCONCLUSION AND FUTURE WORK In this paper we presented the design and implementation of the Spectrum content management architecture.\nSpectrum allows storage policies to be applied to large volumes of content to facilitate efficient storage.\nSpecifically, the system allows different policies to be applied to the same content without replication.\nSpectrum can also apply policies that are time-aware which effectively deals with the storage of continuous media content.\nFinally, the modular design of the Spectrum architecture allows both stand-alone and distributed realizations so that the system can be deployed in a variety of applications.\nThere are a number of open issues that will require future work.\nSome of these issues include: \u2022 We envision Spectrum being able to manage content on systems ranging from large CDNs down to smaller appliances such as TiVO [8].\nIn order for these smaller systems to support Spectrum they will require networking and an external API.\nWhen that API becomes available, we will have to work out how it can be fit into the Spectrum architecture.\n\u2022 Spectrum names content by URL, but we have intentionally not defined the format of Spectrum URLs, how they map back to the content``s actual name, or how the names and URLs should be presented to the user.\nWhile we previously touched on these issues elsewhere [2], we believe there is more work to be done and that consensus-based standards on naming need to be written.\n\u2022 In this paper we``ve focused on content management for continuous media objects.\nWe also believe the Spectrum architecture can be applied to any type of document including plain files, but we have yet to work out the details necessary to support this in our prototype environment.\n\u2022 Any project that helps allow multimedia content to be easily shared over the Internet will have legal hurdles to overcome before it can achieve widespread acceptance.\nAdapting Spectrum to meet legal requirements will likely require more technical work.\n6.\nREFERENCES [1] K. .\nCheng and Y. Kambayashi.\nMulticache-based Content Management for Web Caching.\nProceedings of the First International Conference on Web Information Systems Engineering, Jume 2000.\n[2] C. Cranor, M. Green, C. Kalmanek, D. Shur, S. Sibal, C. Sreenan, and J. van der Merwe.\nPRISM Architecture: Supporting Enhanced Streaming Services in a Content Distribution Network.\nIEEE Internet Computing, July/August 2001.\n[3] C. Cranor, C. Kalmanek, D. Shur, S. Sibal, C. Sreenan, and J. van der Merwe.\nNED: a Network-Enabled Digital Video Recorder.\n11th IEEE Workshop on Local and Metropolitan Area Networks, March 2001.\n[4] eXpat.\nexpat.sourceforge.net.\n[5] Z. Ge, P. Ji, and P. Shenoy.\nA Demand Adaptive and Locality Aware (DALA) Streaming Media Server Cluster Architecture.\nNOSSDAV, May 2002.\n[6] K. Kurapati and S. Gutta and D. Schaffer and J. Martino and J. 
Zimmerman.\nA multi-agent TV recommender.\nProceedings of the UM 2001 workshop, July 2001.\n[7] ReplayTV.\nwww.sonicblue.com.\n[8] TiVo.\nwww.tivo.com.\n11", "lvl-3": "Design and Implementation of a Distributed Content Management System\nABSTRACT\nThe convergence of advances in storage , encoding , and networking technologies has brought us to an environment where huge amounts of continuous media content is routinely stored and exchanged between network enabled devices .\nKeeping track of ( or managing ) such content remains challenging due to the sheer volume of data .\nStoring `` live '' continuous media ( such as TV or radio content ) adds to the complexity in that this content has no well defined start or end and is therefore cumbersome to deal with .\nNetworked storage allows content that is logically viewed as part of the same collection to in fact be distributed across a network , making the task of content management all but impossible to deal with without a content management system .\nIn this paper we present the design and implementation of the Spectrum content management system , which deals with rich media content effectively in this environment .\nSpectrum has a modular architecture that allows its application to both stand-alone and various networked scenarios .\nA unique aspect of Spectrum is that it requires one ( or more ) retention policies to apply to every piece of content that is stored in the system .\nThis means that there are no eviction policies .\nContent that no longer has a retention policy applied to it is simply removed from the system .\nDifferent retention policies can easily be applied to the same content thus naturally facilitating sharing without duplication .\nThis approach also allows Spectrum to easily apply time based policies which are basic building blocks required to deal with the storage of live continuous media , to content .\nWe not only describe the details of the Spectrum architecture but also give typical use cases .\n1 .\nINTRODUCTION\nManipulating and managing content is and has always been one of the primary functions of a computer .\nInitial computing applications include text formatters and program compilers .\nContent was initially managed by explicit user interaction through the use of files and filesystems .\nAs technology has advanced , both the types of content and the way people wish to use it have greatly changed .\nNew content types such as continuous multimedia streams have become commonplace due to the convergence of advances in storage , encoding , and networking technologies .\nFor example , by combining improvements in storage and encoding , it is now possible to store many hours of TV-quality encoded video on a single disk drive .\nThis has led to the introduction of stand alone digital video recording or personal video recording ( PVR ) systems such as TiVO [ 8 ] and ReplayTV [ 7 ] .\nAnother example is the combination of encoding and broadband networking technology .\nThis combination has allowed users to access and share multimedia content in both local and remote area networks with the network itself acting as a huge data repository .\nThe proliferation of high quality content enabled by these advances in storage , encoding , and networking technology creates the need for new ways to manipulate and manage the data .\nThe focus of our work is on the storage of media rich content and in particular the storage of continuous media content in either pre-packaged or `` live '' forms .\nThe need for content management in this 
area is apparent when one consider the following : \u2022 Increases in the capacity and decreases in the cost of storage means that even modest desktop systems today have the ability to store massive amounts of content .\nManaging such content manually ( or more correctly manual `` non-management '' of such content ) lead to great inefficiencies where `` unwanted '' and forgotten content waste storage and where `` wanted '' content can not be found .\n\u2022 While true for all types of content the storage of continuous media content is especially problematic .\nFirst continuous media content is still very demanding in terms of storage resources which means that a policy-less approach to storing it will not work for all but the smallest systems .\nSecond , the storing of `` live '' content such as TV or radio is inherently problematic as these signals are continuous streams with no endpoints .\nThis means that before one can even think about managing such content there is a need to abstract it into something that could be manipulated and managed .\n.\nWhen dealing with stored continuous media there is a need to manage such content at both a fine-grained as well as an aggregate level .\nFor example , an individual PVR user wanting to keep only the highlights of a particular sporting event should not be required to have to store the content pertaining to the complete event .\nAt the same time the user might want to think of content in the aggregate , e.g. remove all of the content that I have not watched for the last month except that content which was explicitly marked for archival .\n.\nAs indicated above , trying to keep track of content on a standalone system without a content management system is very difficult .\nHowever , when the actual storage devices are distributed across a network the task of keeping track of content is almost impossible .\nThis scenario is increasingly common in network based content distribution systems and is likely to also become important in home-networking scenarios .\nIt would seem clear then that a content management system that can efficiently handle media rich content while also exploiting the networked capability of storage devices is needed .\nThis system should allow efficient storage of and access to content across heterogeneous network storage devices according to user preferences .\nThe content management system should translate user preferences into appropriate low-level storage policies and should allow those preferences to be expressed at a fine level of granularity ( while not requiring it in general ) .\nThe content management system should allow the user to manipulate and reason about ( i.e. 
change the storage policy associated with ) the storage of ( parts of ) continuous media content .\nAddressing this distributed content management problem is difficult due to the number of requirements placed on the system .\nFor example : .\nThe content management system must operate on a large number of heterogeneous systems .\nIn some cases the system may be managing content stored on a local filesystem , while in others the content may be stored on a separate network storage appliance .\nThe content manager may be responsible for implementing the policies it uses to reference content or that role may be delegated to a separate computer .\nA application program interface ( API ) and associated network protocols are needed in order for the content management system to provide a uniform interface .\n.\nThe content management system should be flexible and be able to handle differing requirements for content management policies .\nThese policies reflect what content should be obtained , when it should be fetched , how long it should be retained , and under what circumstances it should be discarded .\nThis means that the content management system should allow multiple applications to reference content with a rich set of policies and that it should all work together seamlessly .\n.\nThe content management system needs to be able to monitor references for content and use that information to place content in the right location in the network for efficient application access .\n.\nThe content management system must handle the interaction between implicit and explicit population of content at the network edge .\n.\nThe content system must be able to efficiently manage large sets of content , including continuous streams .\nIt needs to be able to package this content in such a way that it is convenient for users to access .\nTo address these issues we have designed and implemented the Spectrum content management system architecture .\nOur layered architecture is flexible -- its API allows the layers to reside either on a single computer or on multiple networked heterogeneous computers .\nIt allows multiple applications to reference content using differing policies .\nNote that the Spectrum architecture assumes the existence of a content distribution network ( CDN ) that can facilitate the efficient distribution of content ( for example , the PRISM CDN architecture [ 2 ] ) .\nThe rest of this paper is organized as follows .\nSection 2 describes the architecture of our content management system .\nIn Section 3 we describe both our implementation of the Spectrum architecture and examples of its use .\nRelated work is described in Section 4 , and Section 5 contains our conclusion and suggestions for future work .\n2 .\nTHE SPECTRUM DISTRIBUTED CONTENT MANAGEMENT SYSTEM ARCHITECTURE\n2.1 Content Manager\n2.2 Policy Manager\n2.3 Storage Manager\n3 .\nIMPLEMENTATION AND USE CASES\n3.1 Implementation\n3.2 Using the Spectrum Content Management System\n3.3 Spectrum Operational Example\n4 .\nRELATED WORK\nSeveral authors have addressed the problem of the management of content in distributed networks .\nMuch of the work focuses on the policy management aspect .\nFor example in [ 5 ] , the problem of serving multimedia content via distributed servers is considered .\nContent is distributed among server resources in proportion to user demand using a Demand Dissemination Protocol .\nThe performance of the scheme is benchmarked via simulation .\nIn [ 1 ] content is distributed among sub-caches .\nThe authors 
construct a system employing various components , such as a Central Router , Cache Knowledge base , Subcaches , and a Subcache eviction judge .\nThe Cache Knowledge base allows sophisticated policies to be employed .\nSimulation is used to compare the proposed scheme with well-known replacement algorithms .\nOur work differs in that we are considering more than the policy management aspects of the problem .\nAfter carefully considering the required functionality to implement content management in the networked environment , we have partitioned the system into three simple functions , namely Content manager , Policy manager and Storage manager .\nThis has allowed us to easily implement and experiment with a prototype system .\nOther related work involves so called TV recommendation systems which are used in PVRs to automatically select content for users , e.g. [ 6 ] .\nIn the case where Spectrum is used in a PVR configuration this type of system would perform a higher level function and could clearly benefit from the functionalities of the Spectrum architecture .\nFinally , in the commercial CDN environment vendors ( e.g. Cisco and Netapp ) have developed and implemented content management products and tools .\nUnlike the Spectrum architecture which allows edge devices to operate in a largely autonomous fashion , the vendor solutions typically are more tightly coupled to a centralized controller and do not have the sophisticated time-based operations offered by Spectrum .\n5 .\nCONCLUSION AND FUTURE WORK\nIn this paper we presented the design and implementation of the Spectrum content management architecture .\nSpectrum allows storage policies to be applied to large volumes of content to facilitate efficient storage .\nSpecifically , the system allows different policies to be applied to the same content without replication .\nSpectrum can also apply policies that are `` time-aware '' which effectively deals with the storage of continuous media content .\nFinally , the modular design of the Spectrum architecture allows both stand-alone and distributed realizations so that the system can be deployed in a variety of applications .\nThere are a number of open issues that will require future work .\nSome of these issues include :\n\u2022 We envision Spectrum being able to manage content on systems ranging from large CDNs down to smaller appliances such as TiVO [ 8 ] .\nIn order for these smaller systems to support Spectrum they will require networking and an external API .\nWhen that API becomes available , we will have to work out how it can be fit into the Spectrum architecture .\n\u2022 Spectrum names content by URL , but we have intentionally\nnot defined the format of Spectrum URLs , how they map back to the content 's actual name , or how the names and URLs should be presented to the user .\nWhile we previously touched on these issues elsewhere [ 2 ] , we believe there is more work to be done and that consensus-based standards on naming need to be written .\n\u2022 In this paper we 've focused on content management for continuous media objects .\nWe also believe the Spectrum architecture can be applied to any type of document including plain files , but we have yet to work out the details necessary to support this in our prototype environment .\n\u2022 Any project that helps allow multimedia content to be easily shared over the Internet will have legal hurdles to overcome before it can achieve widespread acceptance .\nAdapting Spectrum to meet legal requirements will likely require more 
technical work .", "lvl-4": "Design and Implementation of a Distributed Content Management System\nABSTRACT\nThe convergence of advances in storage , encoding , and networking technologies has brought us to an environment where huge amounts of continuous media content is routinely stored and exchanged between network enabled devices .\nKeeping track of ( or managing ) such content remains challenging due to the sheer volume of data .\nStoring `` live '' continuous media ( such as TV or radio content ) adds to the complexity in that this content has no well defined start or end and is therefore cumbersome to deal with .\nNetworked storage allows content that is logically viewed as part of the same collection to in fact be distributed across a network , making the task of content management all but impossible to deal with without a content management system .\nIn this paper we present the design and implementation of the Spectrum content management system , which deals with rich media content effectively in this environment .\nSpectrum has a modular architecture that allows its application to both stand-alone and various networked scenarios .\nA unique aspect of Spectrum is that it requires one ( or more ) retention policies to apply to every piece of content that is stored in the system .\nThis means that there are no eviction policies .\nContent that no longer has a retention policy applied to it is simply removed from the system .\nDifferent retention policies can easily be applied to the same content thus naturally facilitating sharing without duplication .\nThis approach also allows Spectrum to easily apply time based policies which are basic building blocks required to deal with the storage of live continuous media , to content .\nWe not only describe the details of the Spectrum architecture but also give typical use cases .\n1 .\nINTRODUCTION\nManipulating and managing content is and has always been one of the primary functions of a computer .\nInitial computing applications include text formatters and program compilers .\nContent was initially managed by explicit user interaction through the use of files and filesystems .\nAs technology has advanced , both the types of content and the way people wish to use it have greatly changed .\nNew content types such as continuous multimedia streams have become commonplace due to the convergence of advances in storage , encoding , and networking technologies .\nAnother example is the combination of encoding and broadband networking technology .\nThis combination has allowed users to access and share multimedia content in both local and remote area networks with the network itself acting as a huge data repository .\nThe proliferation of high quality content enabled by these advances in storage , encoding , and networking technology creates the need for new ways to manipulate and manage the data .\nThe focus of our work is on the storage of media rich content and in particular the storage of continuous media content in either pre-packaged or `` live '' forms .\n\u2022 While true for all types of content the storage of continuous media content is especially problematic .\nFirst continuous media content is still very demanding in terms of storage resources which means that a policy-less approach to storing it will not work for all but the smallest systems .\nSecond , the storing of `` live '' content such as TV or radio is inherently problematic as these signals are continuous streams with no endpoints .\nThis means that before one can even think 
about managing such content there is a need to abstract it into something that could be manipulated and managed .\n.\nWhen dealing with stored continuous media there is a need to manage such content at both a fine-grained as well as an aggregate level .\nFor example , an individual PVR user wanting to keep only the highlights of a particular sporting event should not be required to have to store the content pertaining to the complete event .\n.\nAs indicated above , trying to keep track of content on a standalone system without a content management system is very difficult .\nHowever , when the actual storage devices are distributed across a network the task of keeping track of content is almost impossible .\nThis scenario is increasingly common in network based content distribution systems and is likely to also become important in home-networking scenarios .\nIt would seem clear then that a content management system that can efficiently handle media rich content while also exploiting the networked capability of storage devices is needed .\nThis system should allow efficient storage of and access to content across heterogeneous network storage devices according to user preferences .\nThe content management system should translate user preferences into appropriate low-level storage policies and should allow those preferences to be expressed at a fine level of granularity ( while not requiring it in general ) .\nThe content management system should allow the user to manipulate and reason about ( i.e. change the storage policy associated with ) the storage of ( parts of ) continuous media content .\nAddressing this distributed content management problem is difficult due to the number of requirements placed on the system .\nFor example : .\nThe content management system must operate on a large number of heterogeneous systems .\nIn some cases the system may be managing content stored on a local filesystem , while in others the content may be stored on a separate network storage appliance .\nThe content manager may be responsible for implementing the policies it uses to reference content or that role may be delegated to a separate computer .\nA application program interface ( API ) and associated network protocols are needed in order for the content management system to provide a uniform interface .\n.\nThe content management system should be flexible and be able to handle differing requirements for content management policies .\nThese policies reflect what content should be obtained , when it should be fetched , how long it should be retained , and under what circumstances it should be discarded .\nThis means that the content management system should allow multiple applications to reference content with a rich set of policies and that it should all work together seamlessly .\n.\nThe content management system needs to be able to monitor references for content and use that information to place content in the right location in the network for efficient application access .\n.\nThe content management system must handle the interaction between implicit and explicit population of content at the network edge .\n.\nThe content system must be able to efficiently manage large sets of content , including continuous streams .\nIt needs to be able to package this content in such a way that it is convenient for users to access .\nTo address these issues we have designed and implemented the Spectrum content management system architecture .\nIt allows multiple applications to reference content using differing 
policies .\nNote that the Spectrum architecture assumes the existence of a content distribution network ( CDN ) that can facilitate the efficient distribution of content ( for example , the PRISM CDN architecture [ 2 ] ) .\nSection 2 describes the architecture of our content management system .\nIn Section 3 we describe both our implementation of the Spectrum architecture and examples of its use .\n4 .\nRELATED WORK\nSeveral authors have addressed the problem of the management of content in distributed networks .\nMuch of the work focuses on the policy management aspect .\nFor example in [ 5 ] , the problem of serving multimedia content via distributed servers is considered .\nContent is distributed among server resources in proportion to user demand using a Demand Dissemination Protocol .\nThe performance of the scheme is benchmarked via simulation .\nIn [ 1 ] content is distributed among sub-caches .\nThe Cache Knowledge base allows sophisticated policies to be employed .\nSimulation is used to compare the proposed scheme with well-known replacement algorithms .\nOur work differs in that we are considering more than the policy management aspects of the problem .\nAfter carefully considering the required functionality to implement content management in the networked environment , we have partitioned the system into three simple functions , namely Content manager , Policy manager and Storage manager .\nThis has allowed us to easily implement and experiment with a prototype system .\nOther related work involves so called TV recommendation systems which are used in PVRs to automatically select content for users , e.g. [ 6 ] .\nFinally , in the commercial CDN environment vendors ( e.g. Cisco and Netapp ) have developed and implemented content management products and tools .\n5 .\nCONCLUSION AND FUTURE WORK\nIn this paper we presented the design and implementation of the Spectrum content management architecture .\nSpectrum allows storage policies to be applied to large volumes of content to facilitate efficient storage .\nSpecifically , the system allows different policies to be applied to the same content without replication .\nSpectrum can also apply policies that are `` time-aware '' which effectively deals with the storage of continuous media content .\nFinally , the modular design of the Spectrum architecture allows both stand-alone and distributed realizations so that the system can be deployed in a variety of applications .\nThere are a number of open issues that will require future work .\nSome of these issues include :\n\u2022 We envision Spectrum being able to manage content on systems ranging from large CDNs down to smaller appliances such as TiVO [ 8 ] .\nIn order for these smaller systems to support Spectrum they will require networking and an external API .\nWhen that API becomes available , we will have to work out how it can be fit into the Spectrum architecture .\n\u2022 Spectrum names content by URL , but we have intentionally\nnot defined the format of Spectrum URLs , how they map back to the content 's actual name , or how the names and URLs should be presented to the user .\n\u2022 In this paper we 've focused on content management for continuous media objects .\n\u2022 Any project that helps allow multimedia content to be easily shared over the Internet will have legal hurdles to overcome before it can achieve widespread acceptance .\nAdapting Spectrum to meet legal requirements will likely require more technical work .", "lvl-2": "Design and Implementation of a 
Distributed Content Management System\nABSTRACT\nThe convergence of advances in storage , encoding , and networking technologies has brought us to an environment where huge amounts of continuous media content is routinely stored and exchanged between network enabled devices .\nKeeping track of ( or managing ) such content remains challenging due to the sheer volume of data .\nStoring `` live '' continuous media ( such as TV or radio content ) adds to the complexity in that this content has no well defined start or end and is therefore cumbersome to deal with .\nNetworked storage allows content that is logically viewed as part of the same collection to in fact be distributed across a network , making the task of content management all but impossible to deal with without a content management system .\nIn this paper we present the design and implementation of the Spectrum content management system , which deals with rich media content effectively in this environment .\nSpectrum has a modular architecture that allows its application to both stand-alone and various networked scenarios .\nA unique aspect of Spectrum is that it requires one ( or more ) retention policies to apply to every piece of content that is stored in the system .\nThis means that there are no eviction policies .\nContent that no longer has a retention policy applied to it is simply removed from the system .\nDifferent retention policies can easily be applied to the same content thus naturally facilitating sharing without duplication .\nThis approach also allows Spectrum to easily apply time based policies which are basic building blocks required to deal with the storage of live continuous media , to content .\nWe not only describe the details of the Spectrum architecture but also give typical use cases .\n1 .\nINTRODUCTION\nManipulating and managing content is and has always been one of the primary functions of a computer .\nInitial computing applications include text formatters and program compilers .\nContent was initially managed by explicit user interaction through the use of files and filesystems .\nAs technology has advanced , both the types of content and the way people wish to use it have greatly changed .\nNew content types such as continuous multimedia streams have become commonplace due to the convergence of advances in storage , encoding , and networking technologies .\nFor example , by combining improvements in storage and encoding , it is now possible to store many hours of TV-quality encoded video on a single disk drive .\nThis has led to the introduction of stand alone digital video recording or personal video recording ( PVR ) systems such as TiVO [ 8 ] and ReplayTV [ 7 ] .\nAnother example is the combination of encoding and broadband networking technology .\nThis combination has allowed users to access and share multimedia content in both local and remote area networks with the network itself acting as a huge data repository .\nThe proliferation of high quality content enabled by these advances in storage , encoding , and networking technology creates the need for new ways to manipulate and manage the data .\nThe focus of our work is on the storage of media rich content and in particular the storage of continuous media content in either pre-packaged or `` live '' forms .\nThe need for content management in this area is apparent when one consider the following : \u2022 Increases in the capacity and decreases in the cost of storage means that even modest desktop systems today have the ability to store 
massive amounts of content .\nManaging such content manually ( or more correctly manual `` non-management '' of such content ) lead to great inefficiencies where `` unwanted '' and forgotten content waste storage and where `` wanted '' content can not be found .\n\u2022 While true for all types of content the storage of continuous media content is especially problematic .\nFirst continuous media content is still very demanding in terms of storage resources which means that a policy-less approach to storing it will not work for all but the smallest systems .\nSecond , the storing of `` live '' content such as TV or radio is inherently problematic as these signals are continuous streams with no endpoints .\nThis means that before one can even think about managing such content there is a need to abstract it into something that could be manipulated and managed .\n.\nWhen dealing with stored continuous media there is a need to manage such content at both a fine-grained as well as an aggregate level .\nFor example , an individual PVR user wanting to keep only the highlights of a particular sporting event should not be required to have to store the content pertaining to the complete event .\nAt the same time the user might want to think of content in the aggregate , e.g. remove all of the content that I have not watched for the last month except that content which was explicitly marked for archival .\n.\nAs indicated above , trying to keep track of content on a standalone system without a content management system is very difficult .\nHowever , when the actual storage devices are distributed across a network the task of keeping track of content is almost impossible .\nThis scenario is increasingly common in network based content distribution systems and is likely to also become important in home-networking scenarios .\nIt would seem clear then that a content management system that can efficiently handle media rich content while also exploiting the networked capability of storage devices is needed .\nThis system should allow efficient storage of and access to content across heterogeneous network storage devices according to user preferences .\nThe content management system should translate user preferences into appropriate low-level storage policies and should allow those preferences to be expressed at a fine level of granularity ( while not requiring it in general ) .\nThe content management system should allow the user to manipulate and reason about ( i.e. 
change the storage policy associated with ) the storage of ( parts of ) continuous media content .\nAddressing this distributed content management problem is difficult due to the number of requirements placed on the system .\nFor example : .\nThe content management system must operate on a large number of heterogeneous systems .\nIn some cases the system may be managing content stored on a local filesystem , while in others the content may be stored on a separate network storage appliance .\nThe content manager may be responsible for implementing the policies it uses to reference content or that role may be delegated to a separate computer .\nA application program interface ( API ) and associated network protocols are needed in order for the content management system to provide a uniform interface .\n.\nThe content management system should be flexible and be able to handle differing requirements for content management policies .\nThese policies reflect what content should be obtained , when it should be fetched , how long it should be retained , and under what circumstances it should be discarded .\nThis means that the content management system should allow multiple applications to reference content with a rich set of policies and that it should all work together seamlessly .\n.\nThe content management system needs to be able to monitor references for content and use that information to place content in the right location in the network for efficient application access .\n.\nThe content management system must handle the interaction between implicit and explicit population of content at the network edge .\n.\nThe content system must be able to efficiently manage large sets of content , including continuous streams .\nIt needs to be able to package this content in such a way that it is convenient for users to access .\nTo address these issues we have designed and implemented the Spectrum content management system architecture .\nOur layered architecture is flexible -- its API allows the layers to reside either on a single computer or on multiple networked heterogeneous computers .\nIt allows multiple applications to reference content using differing policies .\nNote that the Spectrum architecture assumes the existence of a content distribution network ( CDN ) that can facilitate the efficient distribution of content ( for example , the PRISM CDN architecture [ 2 ] ) .\nThe rest of this paper is organized as follows .\nSection 2 describes the architecture of our content management system .\nIn Section 3 we describe both our implementation of the Spectrum architecture and examples of its use .\nRelated work is described in Section 4 , and Section 5 contains our conclusion and suggestions for future work .\n2 .\nTHE SPECTRUM DISTRIBUTED CONTENT MANAGEMENT SYSTEM ARCHITECTURE\nThe Spectrum architecture consists of three distinct management layers that may or may not be distributed across multiple machines , as shown in Figure 1 .\nThe three layers are : content manager : contains application specific information that is used to manage all of an application 's content according to user preferences .\nFor example , in a personal video recorder ( PVR ) application the content manager receives requests for content from a user interface and interacts with the lower layers of the Spectrum architecture to store and manage content on the device .\npolicy manager : implements and enforces various storage polices that the content manager uses to refer to content .\nThe policy manager exports an interface 
to the content manager that allows the content manager to request that a piece content be treated according to a specific policy .\nSpectrum allows for arbitrary policies to be realized by providing a fixed set of base-policy templates that can easily be parameterized .\nIt is our belief that for most implementations this will be adequate ( if not , Spectrum can easily be extended to dynamically load new base-policy template code at run time ) .\nA key aspect of the policy manager is that it allows different policies to be simultaneously applied to the same content ( or parts of the same content ) .\nFurthermore content can only exist in the system so long as it is referenced by at least one existing policy .\nPolicy conflicts are eliminated by having the policy manager deal exclusively with retention policies rather than with a mix of retention and eviction policies .\nThis means that content with no policy associated with it is immediately and automatically removed from the system .\nThis approach allows us to naturally support sharing of content across different policies which is critical to the efficient storage of large objects .\nNote that a key difference between the content manager and the policy manager is that the content manager manages references to multiple pieces of content , i.e. it has an `` applicationview '' of content .\nOn the other hand , the policy manager is only concerned with the policy used to manage `` standalone '' pieces of content .\nFor example , in a PVR application , the content manager layer would know about the different groups of managed content such as `` keep-indefinitely , '' `` keep for one day , '' and `` keep if available diskspace . ''\nHowever , at the policy manager level , each piece of content has\nFigure 1 : The components of the Spectrum architecture and the four ways they can be configured\nits own policy ( or policies ) applied to it and is independent from other content .\nstorage manager : stores content in an efficient manner while facilitating the objectives of the higher layers .\nSpecifically the storage manager stores content in sub-object `` chunks . ''\nThis approach has advantages for the efficient retrieval of content but more importantly allows policies to be applied at a subobject level which is critically important when dealing with very large objects such as parts of continuous media , e.g. 
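The retention-only model described above (content persists only while at least one policy reference points at it, and is removed as soon as the last reference is dropped) amounts to reference counting over stored chunks. The following is a minimal C sketch of that idea; the structure and function names (chunk_t, pref_add_ref, pref_drop_ref) are illustrative and do not come from the paper.

```c
#include <stdio.h>

/* Hypothetical chunk record: content only lives while refs > 0. */
typedef struct chunk {
    char url[256];     /* name the chunk is stored under        */
    long start, end;   /* time range covered (seconds)          */
    int  refs;         /* number of retention-policy references */
} chunk_t;

/* A retention policy adds a reference when it decides to keep a chunk. */
static void pref_add_ref(chunk_t *c)
{
    c->refs++;
}

/* Dropping the last reference removes the chunk from the store:
 * there is no separate eviction policy. */
static void pref_drop_ref(chunk_t *c)
{
    if (--c->refs == 0) {
        printf("discarding %s [%ld,%ld): no retention policy references it\n",
               c->url, c->start, c->end);
        /* a real storage manager would delete the on-disk chunk here */
    }
}

int main(void)
{
    chunk_t c = { "spectrum://live/news", 0, 60, 0 };

    pref_add_ref(&c);   /* e.g. a one-day rolling-window policy   */
    pref_add_ref(&c);   /* e.g. a keep-for-one-week policy        */
    pref_drop_ref(&c);  /* rolling window expires: chunk survives */
    pref_drop_ref(&c);  /* second policy expires: chunk discarded */
    return 0;
}
```

Because both references point at the same stored chunk, applying a second policy to content already held by another policy costs no additional storage, which is the sharing-without-duplication property emphasized above.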
selected pieces of TV content being stored on a PVR .\nNote that the storage manager has no knowledge of the policies being used by the content and policy managers .\nAnother unique part of our approach is that the interfaces between the layers can either be local or distributed .\nFigure 1 shows the four possible cases .\nThe case on the far left of the Figure shows the simplest ( non-distributed ) case where all the layers are implemented on a single box .\nThis configuration would be used in selfcontained applications such as PVRs .\nThe next case over corresponds to the case where there is a centralized content manager that controls distributed storage devices each of which is responsible for implementing policy based storage .\nIn this case although the remote devices are controlled by the central manager they operate much more independently .\nFor example , once they receive `` instructions '' from the central manager they typically operate in autonomous fashion .\nAn example of this type of configuration is a content distribution network ( CDN ) that distributes and stores content based on a schedule determined by some centralized controller .\nFor example , the CDN could pre-populate edge devices with content that is expected to be very popular or distribute large files to branch offices during off-peak hours in a bandwidth constrained enterprise environment .\nAllowing a single policy manager to control several storage managers leads to the next combination of functions and the most distributed case .\nThe need for this sort of separation might occur for scalability reasons or when different specialized storage devices or appliances are required to be controlled by a single policy manager .\nThe final case shows a content manager combined with a policy manager controlling a remote storage manager .\nThis separation would be possible if the storage manager is somewhat autonomous and does not require continuous fine grained control by the policy manager .\nWe now examine the function of the three layers in detail .\n2.1 Content Manager\nThe content manager layer is the primary interface through which specific applications use the Spectrum architecture .\nAs such the content manager layer provides an API for the application to manipulate all aspects of the Spectrum architecture at different levels of granularity .\nThe content manager API has functions that handle : Physical devices : This set of functions allows physical storage devices to be added to Spectrum thereby putting them under control of the content manager and making the storage available to the system .\nPhysical devices can be local or remote -- this is the only place in the architecture where the application is required to be aware of this distinction .\nOnce a device is mapped into the application through this interface , the system tracks its type and location .\nUsers simply refer to the content through an application-provided label .\nStores : Stores are subsets of physical storage devices .\nThrough these functions an application can create a store on a physical device and assign resources ( e.g. 
disk space ) to it .\nStores can only be created in physical devices that are mapped into the system .\nPolicy Groups : Policy groups are the means whereby an application specifies , instantiates , and modifies the policies that are applied to Spectrum content .\nTypical usage of this set of functions is to select one of a small set of base policies and to parameterize this specific instance of the policy .\nPolicy groups are created within existing stores in the system .\nThe Spectrum architecture has policies that are normally associated with storage that aim to optimize disk usage .\nIn addition a set of policies that take a sophisticated time specification enable storage that is cognizant of time .\nFor example , a simple time-based policy could evict content from the system at a certain absolute or relative time .\nA slightly more involved time-based policy enabled by the Spectrum architecture could allow content to be stored in `` rolling window '' of a number of hours ( for example , the most recent N-number of hours is kept in the system ) .\nTime-based polices are of particular use when dealing with continuous content like a live broadcast .\nContent : At the finest level of granularity content can be added to or removed from the system .\nContent is specified to the system by means of a uniform resource locator ( URL ) which concisely indicates the location of the content as well as the protocol to be used to retrieve it .\nOptionally a time specification can be associated with content .\nThis allows content to be fetched into the system at some future time , or at future time intervals .\nAgain , this is particularly useful for dealing with the storage and management of live content .\n2.2 Policy Manager\nThe policy manager layer of the Spectrum architecture has two main types of API functions .\nFirst , there are functions that operate on managed storage areas and policy-based references ( prefs ) to content stored there .\nSecond , there are sets of functions used to implement each management policy .\nThe first class of functions is used by the content manager layer to access storage .\nOperations include : create , open , and close : These operations are used by the content manager to control its access to storage .\nThe policy manager 's create operation is used to establish contact with a store for the first time .\nOnce this is done , the store can be open and closed using the appropriate routines .\nNote that the parameters used to create a store contain information on how to reach it .\nFor example , local stores have a path associated with them , while remote stores have a remote host and remote path associated with them .\nThe information only needs to be passed to the policy manager once at create time .\nFor open operations , the policy manager will use cached information to contact the store .\nlookup : The lookup operation provides a way for the content manager to query the policy manager about what content is currently present for a given URL .\nFor continuous media time ranges of present media will be returned .\nresource : The resource routines are used to query the policy manager about its current resource usage .\nThere are two resource routines : one that applies to the store as a whole and another that applies to a particular policy reference .\nThe resource API is extensible , we currently support queries on disk usage and I/O load .\npref establish/update : The pref establish operation is used by the content manager to reference content on the store .\nIf 
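The content manager functions enumerated above (physical devices, stores, policy groups, and content) suggest a C interface roughly like the header below. The paper names the functional groups but not concrete declarations, so every identifier and signature here is an assumption made for illustration.

```c
/* content_manager.h -- hypothetical sketch of the content manager API.
 * Function names and signatures are illustrative only. */
#ifndef CONTENT_MANAGER_H
#define CONTENT_MANAGER_H

typedef struct cm_device cm_device_t;  /* a mapped physical storage device      */
typedef struct cm_store  cm_store_t;   /* a resource-bounded subset of a device */
typedef struct cm_group  cm_group_t;   /* a parameterized policy group          */

/* Physical devices: map a local path or a remote host into the system.
 * This is the only place the application sees the local/remote distinction. */
cm_device_t *cm_device_add_local (const char *label, const char *path);
cm_device_t *cm_device_add_remote(const char *label, const char *host,
                                  const char *remote_path);

/* Stores: carve out resources (here, bytes of disk space) on a device. */
cm_store_t *cm_store_create(cm_device_t *dev, const char *name,
                            long long disk_bytes);

/* Policy groups: pick a well-known base policy and parameterize it,
 * e.g. a rolling window covering the most recent N hours. */
cm_group_t *cm_group_create(cm_store_t *store, const char *base_policy,
                            const char *params);

/* Content: reference content by URL, optionally with a time specification
 * so live material can be fetched at future times or intervals. */
int cm_content_add   (cm_group_t *group, const char *store_url,
                      const char *fetch_url, const char *timespec);
int cm_content_remove(cm_group_t *group, const char *store_url);

#endif /* CONTENT_MANAGER_H */
```

A PVR-style application might then create a group with a well-known base-policy name and a parameter string (for example, a 24-hour rolling window) and add a broadcast URL to it, leaving fetching and retention entirely to the lower layers.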
the content is not present , this call will result in the content being fetched ( or being scheduled to be fetched if the content is not currently available ) .\nParameters of this function include the URL to store it under , the URL to fetch data from if it is not present , the policy to store the content under , and the arguments used to parameterize the policy .\nThe result of a successful pref establish operation is a policy reference ID string .\nThis ID can be used with the update operation to either change the storage policy parameters or delete the reference entirely .\nThe second group of policy manager functions are used to implement all the polices supported by Spectrum .\nWe envision a small set of base-level policy functions that can be parameterized to produce a wide range of storage polices .\nFor example , a policy that implements recording a repeating time window can be parameterized to function daily , weekly , or monthly .\nNote that the policy manager is only concerned with executing a specific policy .\nThe higher-level reasons for choosing a given policy are handled by the content and application manager .\nA base policy is implemented using six functions : establish : called when a pref is established with the required URLs and base policy 's parameters .\nThe establish routine references any content already present in the store and then determines the next time it needs to take action ( e.g. start a download ) and schedules a callback for that time .\nIt can also register to receive callbacks if new content is received for a given URL .\nupdate : called to change the parameters of a pref , or to discard the policy reference .\nnewclip : called when a chunk of new content is received for a URL of interest .\nThe base policy typically arranges for newclip to be called for a given URL when the pref is established .\nWhen newclip is called , the base policy checks its parameters to determine if it wishes to add a reference to the clip just received .\ncallback : called when the pref schedules a timer-based callback .\nThis is a useful wakeup mechanism for prefs that need to be idle for a long period of time ( e.g. between programs ) .\nboot/shutdown : called when the content management system is booting or shutting down .\nThe boot operation is typically used to schedule initial callbacks or start I/O operations .\nThe shutdown operation is used to gracefully shutdown I/O streams and save state .\n2.3 Storage Manager\nThe role of Spectrum 's storage manager is to control all I/O operations associated with a given store .\nSpectrum 's storage manager supports storing content both on a local filesystem and on a remote fileserver ( e.g. 
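Since every base policy is driven through the same six entry points (establish, update, newclip, callback, boot, and shutdown), one natural C realization is a per-policy table of function pointers. The sketch below shows one plausible shape for such a table; the type names pref_t, clip_t, and base_policy_t are hypothetical.

```c
#include <time.h>

/* Opaque handles assumed for illustration. */
typedef struct pref pref_t;   /* one policy reference to some content */
typedef struct clip clip_t;   /* one newly received chunk of content  */

/* Hypothetical dispatch table: one instance per base policy.
 * The policy manager calls through these six entry points. */
typedef struct base_policy {
    const char *name;                                  /* well-known policy name */
    int  (*establish)(pref_t *p, const char *params);  /* pref created           */
    int  (*update)   (pref_t *p, const char *params);  /* params changed, or the
                                                          pref is discarded      */
    void (*newclip)  (pref_t *p, clip_t *c);           /* new chunk arrived for
                                                          a watched URL          */
    void (*callback) (pref_t *p, time_t now);          /* scheduled wakeup       */
    int  (*boot)     (pref_t *p);                      /* system starting        */
    int  (*shutdown) (pref_t *p);                      /* system stopping        */
} base_policy_t;
```

A registry of these tables, keyed by the well-known base-policy name, would let a pref-establish call look up the requested policy and hand it the parameter string to interpret.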
a storage appliance ) .\nFor continuous media , at the storage manager level content is stored as a collection of time-based chunks .\nDepending on the underlying filesystem , a chunk could correspond to a single file or a data node in a storage database .\nThe two main storage manager operations are input and output .\nThe input routine is used to store content in a store under a given name .\nThe output routine is used to send data from the store to a client .\nFor streaming media both the input and output routines take time ranges that schedule when the I/O operation should happen , and both routines return an I/O handle that can be used to modify or cancel the I/O request in the future .\nMuch like the policy manager , the storage manager also provides API functions to create , open , and close stores .\nIt also supports operations to query the resource usages and options supported by the store .\nFinally , the storage manager also has a discard routine that may be used by the policy manager to inform the store to remove content from the store .\n3 .\nIMPLEMENTATION AND USE CASES\nIn this section we describe our implementation of Spectrum and describe how it can be used .\n3.1 Implementation\nWe have implemented Spectrum 's three layers in C as part of a library that can be linked with Spectrum-based applications .\nEach layer keeps track of its state through a set of local data files that persist across reboots , thus allowing Spectrum to smoothly handle power cycles .\nFor layers that reside on remote systems ( e.g. a remote store ) only the meta-information needed to contact the remote\nFigure 2 : Spectrum in a Network Enabled DVR\nnode is stored locally .\nOur test application uses a local policy and storage manager to fetch content and store it in a normal Unixbased filesystem .\nTo efficiently handle communications with layers running on remote systems , all Spectrum 's API calls support both synchronous and asynchronous modes through a uniform interface defined by the reqinfo structure .\nEach API call takes a pointer to a reqinfo structure as one of its arguments .\nThis structure is used to hold the call state and return status .\nFor async calls , the reqinfo also contains a pointer to a callback function .\nTo use a Spectrum API function , the caller first chooses either the sync or async mode and allocates a reqinfo structure .\nFor sync calls , the reqinfo can be allocated on the stack , otherwise it is allocated with malloc .\nFor async calls , a callback function must be provided when the reqinfo is allocated .\nNext the caller invokes the desired Spectrum API function passing the reqinfo structure as an argument .\nFor sync calls , the result of the calls is returned immediately in the reqinfo structure .\nFor successful async calls , a `` call in progress '' value is returned .\nLater , when the async call completes or a timeout occurs , the async callback function is called with the appropriate information needed to complete processing .\nThe modular/layered design of the Spectrum architecture simplifies the objective of distribution of functionality .\nFurthermore , communication between functions is typically of a `` master-slave ( s ) '' nature .\nThis means that several approaches to distributed operation are possible that would satisfy the architectural requirements .\nIn our implementation we have opted to realize this functionality with a simple modular design .\nWe provide a set of asynchronous remote access stub routines that allow users to select the 
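The storage manager operations just listed (time-scheduled input and output that return cancellable I/O handles, plus a discard hook used by the policy manager) could be declared along the lines below. As before, the names and signatures are assumptions rather than the paper's actual interface.

```c
#include <time.h>

typedef struct sm_store sm_store_t;   /* an open store                   */
typedef struct sm_io    sm_io_t;      /* handle for a scheduled transfer */

/* Schedule storing content under `name` for the time range [from, to).
 * Returns a handle that can later be modified or cancelled. */
sm_io_t *sm_input (sm_store_t *s, const char *name, const char *source_url,
                   time_t from, time_t to);

/* Schedule sending stored data for [from, to) to a client endpoint. */
sm_io_t *sm_output(sm_store_t *s, const char *name, const char *client,
                   time_t from, time_t to);

int sm_io_cancel(sm_io_t *io);   /* cancel or modify a pending transfer */

/* Called by the policy manager when the last retention reference to a chunk
 * disappears; the storage manager itself knows nothing about policies. */
int sm_discard(sm_store_t *s, const char *name, time_t from, time_t to);
```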
transport protocol to use and to select the encoding method that should be used with the data to be transferred .\nTransport protocols can range simple protocols such as UDP up to more complex protocols such as HTTP .\nWe currently are using plain TCP for most of our transport .\nFunction calls across the different Spectrum APIs can be encoded using a variety of formats include plain text , XDR , and XML .\nWe are currently using the eXpat XML library [ 4 ] to encode our calls .\nWhile we are current transferring our XML encoded messages using a simple TCP connection , in a real world setting this can easily be replaced with an implementation based on secure sockets layer ( SSL ) to improve security by adding SSL as a transport protocol .\nAn important aspect of Spectrum is that it can manage content based on a given policy across heterogenous platforms .\nAs we explained previously in Section 2.2 , envision a small set of base-level policy functions that can be parameterized to produce a wide range of storage polices .\nIn order for this to work properly , all Spectrumbased applications must understand the base-level policies and how they can be parameterized .\nTo address this issue , we treat each base-level policy as if it was a separate program .\nEach base-level policy should have a well known name and command `` line '' options for parameterization .\nIn fact , in our implementation we pass parameters to base-level policies as a string that can be parsed using a getopt-like function .\nThis format is easily understood and provides portability since byte order is not an issue in a string .\nSince this part of Spectrum is not on the critical data path , this type of formatting is not a performance issue .\n3.2 Using the Spectrum Content Management System\nIn this section we show two examples of the use of the Spectrum Content Management System in our environment .\nThe focus of our previous work has been content distribution for streaming media content [ 2 ] and network enabled digital video recording [ 3 ] .\nThe Spectrum system is applicable to both scenarios as follows .\nFigure 2 shows the Network Enabled DVR ( NED ) architecture .\nIn this case all layers of the Spectrum architecture reside on the same physical device in a local configuration .\nThe DVR application obtains program listings from some network source , deals with user presentation through a graphical user interface ( GUI ) , and interface with the Spectrum system through the content management layer APIs .\nThis combination of higher level functions allows the user to select both content to be stored and what storage policies to\nFigure 3 : Spectrum in a Content Distribution Architecture\napply to such content .\nObtaining the content ( through the network or locally ) and the subsequent storage on the local system is then handled by the policy and storage managers .\nThe use of Spectrum in a streaming content distribution architecture ( e.g. PRISM [ 2 ] ) is depicted in Figure 3 .\nIn this environment streaming media content ( both live , canned-live and on-demand ) is being distributed to edge portals from where streaming endpoints are being served .\nIn our environment content distribution and storage is done from a centralized content management station which controls several of the edge portals .\nThe centralized station allows administrators to manage the distribution and storage of content without requiring continuous communication between the content manager and the edge devices , i.e. 
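The reqinfo convention described in the implementation discussion above is specified in enough detail to sketch its likely shape: a small structure that carries call state, the return status, and an optional completion callback, stack-allocated for synchronous calls and heap-allocated for asynchronous ones. The field and constant names below are guesses made for illustration.

```c
#include <stdlib.h>

struct reqinfo;                                       /* forward declaration */
typedef void (*req_callback_t)(struct reqinfo *req);  /* async completion    */

/* Hypothetical uniform call descriptor passed to every Spectrum API call. */
typedef struct reqinfo {
    int            status;     /* result, or REQ_IN_PROGRESS for async calls */
    void          *result;     /* call-specific return data                  */
    req_callback_t callback;   /* NULL for synchronous calls                 */
    void          *user_data;  /* handed back to the callback                */
} reqinfo_t;

enum { REQ_OK = 0, REQ_IN_PROGRESS = 1, REQ_ERROR = -1 };

/* Synchronous use: stack allocation, result available when the call returns. */
static reqinfo_t make_sync_req(void)
{
    reqinfo_t r = { REQ_OK, NULL, NULL, NULL };  /* status is overwritten */
    return r;
}

/* Asynchronous use: heap allocation plus a completion callback, since the
 * structure must outlive the API call that issued it. */
static reqinfo_t *make_async_req(req_callback_t cb, void *user_data)
{
    reqinfo_t *r = malloc(sizeof *r);
    if (r) {
        r->status    = REQ_IN_PROGRESS;
        r->result    = NULL;
        r->callback  = cb;
        r->user_data = user_data;
    }
    return r;
}
```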
once `` instructions '' have been given to edge devices they can operate independently until changes are to be made .\n3.3 Spectrum Operational Example\nTo illustrate how Spectrum handles references to content , consider a Spectrum-based PVR application programmed to store one days worth of streaming content in a rolling window .\nTo set up the rolling window , the application would use the content manager API to create a policy group and policy reference to the desired content .\nThe establishment of the one-day rolling window policy reference would cause the policy manger to ask the storage manager to start receiving the stream .\nAs each chunk of streaming data arrives , the policy manager executes the policy reference 's `` newclip '' function .\nThe `` newclip '' function adds a reference to each arriving chunk , and schedules a callback a day later .\nAt that time , the policy will drop its now day-old reference to the content and the content will be discarded unless it is referenced by some other policy .\nNow , consider the case where the user decides to save part of the content ( e.g. a specific program ) in the rolling window for an extra week .\nTo do this , the application requests that the content manager add an additional new policy reference to the part of the content to preserved .\nThus , the preserved content has two references to it : one from the rolling window and one from the request to preserve the content for an additional week .\nAfter one day the reference from the rolling window will be discarded , but the content will be\nFigure 4 : Data layout of Spectrum policy store\npreserved by the second reference .\nAfter the additional week has past , the callback function for the second reference will be called .\nThis function will discard the remaining reference to the content and as there are no remaining references the content will be freed .\nIn order to function in scenarios like the ones described above , Spectrum 's policy manager must manage and maintain all the references to various chunks of media .\nThese references are persistent and thus must be able to survive even if the machine maintaining them is rebooted .\nOur Spectrum policy manager implementation accomplishes this using the file and directory structure shown in Figure 4 .\nThere are three classes of data stored , and each class has its own top level directory .\nThe directories are : data : this directory is used by the storage manager to store each active URL 's chunks of media .\nThe media files can be encoded in any format , for example MPEG , Windows Media , or QuickTime .\nNote that this directory is used only if the storage manager is local .\nIf the policy manager is using an external storage manager ( e.g. 
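The one-day rolling window walked through above reduces to a very small newclip/callback pair: reference each arriving chunk and schedule its release 24 hours later. The self-contained sketch below stubs out the policy manager's services (clip_add_ref, clip_drop_ref, and schedule_callback are invented stand-ins) so the control flow can be compiled and run as written.

```c
#include <stdio.h>
#include <time.h>

#define ONE_DAY (24 * 60 * 60)

/* Minimal stand-ins for the policy manager's objects and services; these
 * names are assumptions made only so the sketch is self-contained. */
typedef struct clip { int id; int refs; } clip_t;
typedef struct pref { int id; } pref_t;

static void clip_add_ref(pref_t *p, clip_t *c) { (void)p; c->refs++; }

static void clip_drop_ref(pref_t *p, clip_t *c)
{
    (void)p;
    if (--c->refs == 0)
        printf("clip %d has no remaining references; discarding\n", c->id);
}

/* Stub: a real implementation would arm a persistent timer. */
static void schedule_callback(pref_t *p, clip_t *c, time_t when)
{
    printf("pref %d: wake up for clip %d at %ld\n", p->id, c->id, (long)when);
}

/* newclip: called for every chunk received on the watched URL. */
static void rolling_window_newclip(pref_t *p, clip_t *c)
{
    clip_add_ref(p, c);                             /* hold the new chunk  */
    schedule_callback(p, c, time(NULL) + ONE_DAY);  /* release in 24 hours */
}

/* callback: fires one day after the chunk arrived.  The chunk survives only
 * if another policy (e.g. a keep-for-a-week reference) still points at it. */
static void rolling_window_callback(pref_t *p, clip_t *c)
{
    clip_drop_ref(p, c);
}

int main(void)
{
    pref_t window = { 1 };
    clip_t chunk  = { 42, 0 };

    rolling_window_newclip(&window, &chunk);   /* chunk arrives                    */
    rolling_window_callback(&window, &chunk);  /* simulated one day later: dropped */
    return 0;
}
```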
a storage appliance ) , then the media files are stored remotely and are only remotely referenced by the policy manager .\nmeta : this directory contains general meta information about the storage manager being used and the data it is storing .\nGeneral information is stored in the store subdirectory and includes the location of the store ( local or remote ) and information about the types of chunks of data the store can handle .\nThe meta directory also contains a subdirectory per-URL that contains information about the chunks of data stored .\nThe chunks file contains a list of chunks currently stored and their reference counts .\nThe prefs file contains a list of active policy references that point to this URL .\nThe ranges file contains a list of time ranges of data currently stored .\nFinally , the media file describes the format of the media being stored under the current URL .\npoly : this directory contains a set of host subdirectories .\nEach host subdirectory contains the set of policy references created by that host .\nInformation on each policy reference is broken up into three files .\nFor example , a policy reference named ref1 would be stored in ref1 , ref1.files , and ref1.state .\nThe ref1 file contains information about the policy reference that does not change frequently .\nThis information includes the base-policy and the parameters used to create the reference .\nThe ref1.files file contains the list of references to chunks that pref ref1 owns .\nFinally , the ref1.state file contains optional policy-specific state information that can change over time .\nTogether , these files and directories are used to track references in our implementation of Spectrum .\nNote that other implementations are possible .\nFor example , a carrier-grade Spectrum manager might store all its policy and reference information in a high-performance database system .\n4 .\nRELATED WORK\nSeveral authors have addressed the problem of the management of content in distributed networks .\nMuch of the work focuses on the policy management aspect .\nFor example in [ 5 ] , the problem of serving multimedia content via distributed servers is considered .\nContent is distributed among server resources in proportion to user demand using a Demand Dissemination Protocol .\nThe performance of the scheme is benchmarked via simulation .\nIn [ 1 ] content is distributed among sub-caches .\nThe authors construct a system employing various components , such as a Central Router , Cache Knowledge base , Subcaches , and a Subcache eviction judge .\nThe Cache Knowledge base allows sophisticated policies to be employed .\nSimulation is used to compare the proposed scheme with well-known replacement algorithms .\nOur work differs in that we are considering more than the policy management aspects of the problem .\nAfter carefully considering the required functionality to implement content management in the networked environment , we have partitioned the system into three simple functions , namely Content manager , Policy manager and Storage manager .\nThis has allowed us to easily implement and experiment with a prototype system .\nOther related work involves so called TV recommendation systems which are used in PVRs to automatically select content for users , e.g. 
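To make the policy-store layout concrete, the snippet below assembles the three per-reference file names (the reference record, its chunk list, and its optional state file) under the poly/<host>/ directory just described. The store root and the helper function are invented for the example; the paper fixes only the relative layout.

```c
#include <stdio.h>

/* Hypothetical store root; the paper does not fix an absolute location. */
#define STORE_ROOT "/var/spectrum/store"

/* Build the three files that persist one policy reference, e.g.
 *   poly/<host>/ref1        - base policy and creation parameters
 *   poly/<host>/ref1.files  - chunks this reference currently owns
 *   poly/<host>/ref1.state  - optional policy-specific mutable state */
static void pref_paths(const char *host, const char *refname)
{
    char record[512], files[512], state[512];

    snprintf(record, sizeof record, "%s/poly/%s/%s",       STORE_ROOT, host, refname);
    snprintf(files,  sizeof files,  "%s/poly/%s/%s.files", STORE_ROOT, host, refname);
    snprintf(state,  sizeof state,  "%s/poly/%s/%s.state", STORE_ROOT, host, refname);

    printf("%s\n%s\n%s\n", record, files, state);
}

int main(void)
{
    pref_paths("pvr-livingroom", "ref1");
    return 0;
}
```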
[ 6 ] .\nIn the case where Spectrum is used in a PVR configuration this type of system would perform a higher level function and could clearly benefit from the functionalities of the Spectrum architecture .\nFinally , in the commercial CDN environment vendors ( e.g. Cisco and Netapp ) have developed and implemented content management products and tools .\nUnlike the Spectrum architecture which allows edge devices to operate in a largely autonomous fashion , the vendor solutions typically are more tightly coupled to a centralized controller and do not have the sophisticated time-based operations offered by Spectrum .\n5 .\nCONCLUSION AND FUTURE WORK\nIn this paper we presented the design and implementation of the Spectrum content management architecture .\nSpectrum allows storage policies to be applied to large volumes of content to facilitate efficient storage .\nSpecifically , the system allows different policies to be applied to the same content without replication .\nSpectrum can also apply policies that are `` time-aware '' which effectively deals with the storage of continuous media content .\nFinally , the modular design of the Spectrum architecture allows both stand-alone and distributed realizations so that the system can be deployed in a variety of applications .\nThere are a number of open issues that will require future work .\nSome of these issues include :\n\u2022 We envision Spectrum being able to manage content on systems ranging from large CDNs down to smaller appliances such as TiVO [ 8 ] .\nIn order for these smaller systems to support Spectrum they will require networking and an external API .\nWhen that API becomes available , we will have to work out how it can be fit into the Spectrum architecture .\n\u2022 Spectrum names content by URL , but we have intentionally\nnot defined the format of Spectrum URLs , how they map back to the content 's actual name , or how the names and URLs should be presented to the user .\nWhile we previously touched on these issues elsewhere [ 2 ] , we believe there is more work to be done and that consensus-based standards on naming need to be written .\n\u2022 In this paper we 've focused on content management for continuous media objects .\nWe also believe the Spectrum architecture can be applied to any type of document including plain files , but we have yet to work out the details necessary to support this in our prototype environment .\n\u2022 Any project that helps allow multimedia content to be easily shared over the Internet will have legal hurdles to overcome before it can achieve widespread acceptance .\nAdapting Spectrum to meet legal requirements will likely require more technical work ."} {"id": "I-22", "title": "", "abstract": "", "keyphrases": ["share belief map", "multiag teamwork", "heurist", "reason", "problem-solv", "collabor", "teamwork", "expect", "teamwork schema", "human-agent team perform", "cognit load theori", "human perform", "resourc alloc", "task perform", "info-share", "multi-parti commun", "cognit model", "human-center teamwork", "share belief map"], "prmu": [], "lvl-1": "Realistic Cognitive Load Modeling for Enhancing Shared Mental Models in Human-Agent Collaboration Xiaocong Fan College of Information Sciences and Technology The Pennsylvania State University University Park, PA 16802 zfan@ist.psu.edu John Yen College of Information Sciences and Technology The Pennsylvania State University University Park, PA 16802 jyen@ist.psu.edu ABSTRACT Human team members often develop shared expectations to predict each 
other``s needs and coordinate their behaviors.\nIn this paper the concept Shared Belief Map is proposed as a basis for developing realistic shared expectations among a team of Human-Agent-Pairs (HAPs).\nThe establishment of shared belief maps relies on inter-agent information sharing, the effectiveness of which highly depends on agents'' processing loads and the instantaneous cognitive loads of their human partners.\nWe investigate HMM-based cognitive load models to facilitate team members to share the right information with the right party at the right time.\nThe shared belief map concept and the cognitive/processing load models have been implemented in a cognitive agent architectureSMMall.\nA series of experiments were conducted to evaluate the concept, the models, and their impacts on the evolving of shared mental models of HAP teams.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Intelligent agents, Multiagent systems General Terms Design, Experimentation, Human Factors 1.\nINTRODUCTION The entire movement of agent paradigm was spawned, at least in part, by the perceived importance of fostering human-like adjustable autonomy.\nHuman-centered multiagent teamwork has thus attracted increasing attentions in multi-agent systems field [2, 10, 4].\nHumans and autonomous systems (agents) are generally thought to be complementary: while humans are limited by their cognitive capacity in information processing, they are superior in spatial, heuristic, and analogical reasoning; autonomous systems can continuously learn expertise and tacit problem-solving knowledge from humans to improve system performance.\nIn short, humans and agents can team together to achieve better performance, given that they could establish certain mutual awareness to coordinate their mixed-initiative activities.\nHowever, the foundation of human-agent collaboration keeps being challenged because of nonrealistic modeling of mutual awareness of the state of affairs.\nIn particular, few researchers look beyond to assess the principles of modeling shared mental constructs between a human and his/her assisting agent.\nMoreover, human-agent relationships can go beyond partners to teams.\nMany informational processing limitations of individuals can be alleviated by having a group perform tasks.\nAlthough groups also can create additional costs centered on communication, resolution of conflict, and social acceptance, it is suggested that such limitations can be overcome if people have shared cognitive structures for interpreting task and social requirements [8].\nTherefore, there is a clear demand for investigations to broaden and deepen our understanding on the principles of shared mental modeling among members of a mixed human-agent team.\nThere are lines of research on multi-agent teamwork, both theoretically and empirically.\nFor instance, Joint Intention [3] and SharedPlans [5] are two theoretical frameworks for specifying agent collaborations.\nOne of the drawbacks is that, although both have a deep philosophical and cognitive root, they do not accommodate the modeling of human team members.\nCognitive studies suggested that teams which have shared mental models are expected to have common expectations of the task and team, which allow them to predict the behavior and resource needs of team members more accurately [14, 6].\nCannon-Bowers et al. 
[14] explicitly argue that team members should hold compatible models that lead to common expectations.\nWe agree on this and believe that the establishment of shared expectations among human and agent team members is a critical step to advance human-centered teamwork research.\nIt has to be noted that the concept of shared expectation can broadly include role assignment and its dynamics, teamwork schemas and progress, communication patterns and intentions, etc.\nWhile the long-term goal of our research is to understand how shared cognitive structures can enhance human-agent team performance, the specific objective of the work reported here is to develop a computational cognitive capacity model to facilitate the establishment of shared expectations.\nIn particular, we argue that to favor human-agent collaboration, an agent system should be designed to allow the estimation and prediction of human teammates'' (relative) cognitive loads, and to use that to offer improvised, unintrusive help.\nIdeally, being able to predict the cognitive/processing capacity curves of teammates could allow a team member to help the right party at the right time, avoiding unbalanced work/cognitive loads among the team.\nThe last point is on the modeling itself.\nAlthough an agent``s cognitive model of its human peer need not be descriptively accurate, having at least a realistic model can be beneficial in offering unintrusive help and bias reduction, as well as trustable and self-adjustable autonomy.\nFor example, although humans'' use of cognitive simplification mechanisms (e.g., heuristics) does not always lead to errors in judgment, it can lead to predictable biases in responses [8].\nIt is feasible to develop agents as cognitive aids to alleviate humans'' biases, as long as an agent can be trained to obtain a model of a human``s cognitive inclination.\nWith a realistic human cognitive model, an agent can also better adjust its automation level.\nWhen its human peer is becoming overloaded, an agent can take over resource-consuming tasks, shifting the human``s limited cognitive resources to tasks where a human``s role is indispensable.\nWhen its human peer is underloaded, an agent can take the chance to observe the human``s operations to refine its cognitive model of the human.\nMany studies have documented that human choices and behaviors do not agree with predictions from rational models.\nIf agents could make recommendations in ways that humans appreciate, it would be easier to establish trust relationships between agents and humans; this, in turn, will encourage humans'' use of automation.\nThe rest of the paper is organized as follows.\nIn Section 2 we review cognitive load theories and measurements.\nAn HMM-based cognitive load model is given in Section 3 to support resource-bounded teamwork among human-agent-pairs.\nSection 4 describes the key concept shared belief map as implemented in SMMall, and Section 5 reports the experiments for evaluating the cognitive models and their impacts on the evolving of shared mental models.\n2.\nCOGNITIVE CAPACITY-OVERVIEW\nPeople are information processors.\nMost cognitive scientists [8] believe that the human information-processing system consists of an executive component and three main information stores: (a) sensory store, which receives and retains information for one second or so; (b) working (or short-term) memory, which refers to the limited capacity to hold (approximately seven elements at any one time [9]), retain
(for several seconds), and manipulate (two or three information elements simultaneously) information; and (c) longterm memory, which has virtually unlimited capacity [1] and contains a huge amount of accumulated knowledge organized as schemata.\nCognitive load studies are, by and large, concerned about working memory capacity and how to circumvent its limitations in human problem-solving activities such as learning and decision making.\nAccording to the cognitive load theory [11], cognitive load is defined as a multidimensional construct representing the load that a particular task imposes on the performer.\nIt has a causal dimension including causal factors that can be characteristics of the subject (e.g. expertise level), the task (e.g. task complexity, time pressure), the environment (e.g. noise), and their mutual relations.\nIt also has an assessment dimension reflecting the measurable concepts of mental load (imposed exclusively by the task and environmental demands), mental effort (the cognitive capacity actually allocated to the task), and performance.\nLang``s information-processing model [7] consists of three major processes: encoding, storage, and retrieval.\nThe encoding process selectively maps messages in sensory stores that are relevant to a person``s goals into working memory; the storage process consolidates the newly encoded information into chunks, and form associations and schema to facilitate subsequent recalls; the retrieval process searches the associated memory network for a specific element/schema and reactivates it into working memory.\nThe model suggests that processing resources (cognitive capacity) are independently allocated to the three processes.\nIn addition, working memory is used both for holding and for processing information [1].\nDue to limited capacity, when greater effort is required to process information, less capacity remains for the storage of information.\nHence, the allocation of the limited cognitive resources has to be balanced in order to enhance human performance.\nThis comes to the issue of measuring cognitive load, which has proven difficult for cognitive scientists.\nCognitive load can be assessed by measuring mental load, mental effort, and performance using rating scales, psychophysiological (e.g. 
measures of heart activity, brain activity, eye activity), and secondary task techniques [12].\nSelf-ratings may appear questionable and restricted, especially when instantaneous load needs to be measured over time.\nAlthough physiological measures are sometimes highly sensitive for tracking fluctuating levels of cognitive load, costs and workplace conditions often favor task- and performance-based techniques, which involve the measurement of a secondary task as well as the primary task under consideration.\nSecondary task techniques are based on the assumption that performance on a secondary task reflects the level of cognitive load imposed by a primary task [15].\nFrom the resource allocation perspective, assuming a fixed cognitive capacity, any increase in cognitive resources required by the primary task must inevitably decrease the resources available for the secondary task [7].\nConsequently, performance in a secondary task deteriorates as the difficulty or priority of the primary task increases.\nThe level of cognitive load can thus be manifested by the secondary task performance: the subject is getting overloaded if the secondary task performance drops.\nA secondary task can be as simple as detecting a visual or auditory signal, but it requires sustained attention.\nIts performance can be measured in terms of reaction time, accuracy, and error rate.\nHowever, one important drawback of secondary task performance, as noted by Paas [12], is that it can interfere considerably with the primary task (competing for limited capacity), especially when the primary task is complex.\nTo better understand and measure cognitive load, Xie and Salvendy [16] introduced a conceptual framework which distinguishes instantaneous load, peak load, accumulated load, average load, and overall load.\nIt seems that the notion of instantaneous load, which represents the dynamics of cognitive load over time, is especially useful for monitoring the fluctuation trend so that free capacity can be exploited at the most appropriate time to enhance the overall performance in human-agent collaborations.\nFigure 1: Human-centered teamwork model (a team of Human-Agent Pairs 1..n; each pair couples a human partner, via an HAI, with an agent that maintains an agent processing model and an agent communication model, and each pair is connected to its teammates).\n3.\nHUMAN-CENTERED TEAMWORK MODEL\nPeople are limited information processors, and so are intelligent agent systems; this is especially true when they act under hard or soft timing constraints imposed by the domain problems.\nWith respect to our goal to build realistic expectations among teammates, we take two important steps.\nFirst, agents are resource-bounded; their processing capacity is limited by computing resources, inference knowledge, concurrent tasking capability, etc.\nWe withdraw the assumption that an agent knows all the information/intentions communicated from other teammates.\nInstead, we contend that, due to limited processing capacity, an agent may only have opportunities to process (make sense of) a portion of the incoming information, with the rest ignored.\nTaking this approach will largely change the way in which an agent views (models) the involvement and cooperativeness of its teammates in a team activity.\nIn other words, the establishment of shared mental models regarding team members'' beliefs, intentions, and responsibilities can no longer rely on inter-agent communication only.
This being said, we are not dropping the assumption that teammates are trustable.\nWe still stick to this, only that team members cannot overtrust each other; an agent has to consider the possibility that the information it shares with others might not be as effective as expected due to the recipients'' limited processing capacities.\nSecond, human teammates are bounded by their cognitive capacities.\nAs far as we know, the research reported here is the first attempt in the area of human-centered multi-agent teamwork that really considers building and using a human``s cognitive load model to facilitate teamwork involving both humans and agents.\nWe use ⟨Hi, Ai⟩ to denote Human-Agent-Pair (HAP) i.\n3.1 Computational Cognitive Capacity Model\nAn intelligent agent being a cognitive aid, it is desirable that the model of its human partner implemented within the agent is cognitively acceptable, if not descriptively accurate.\nOf course, building a cognitive load model that is cognitively acceptable is not trivial; there exist a variety of cognitive load theories and different measuring techniques.\nWe here choose to focus on the performance variables of secondary tasks, given the ample evidence supporting secondary task performance as a highly sensitive and reliable technique for measuring a human``s cognitive load [12].\nIt``s worth noting that, just for the purpose of estimating a human subject``s cognitive load, any artificial task (e.g., pressing a button in response to unpredictable stimuli) can be used as a secondary task that the subject is forced to go through.\nHowever, in a realistic application, we have to make sure that the selected secondary task interacts with the primary task in meaningful ways, which is not easy and often depends on the domain problem at hand.\nFor example, in the experiment below, we used the number of newly available information items correctly recalled as the secondary task, and the effectiveness of information sharing as the primary task.\nThis is realistic to intelligence workers because in time-stress situations they have to know what information to share in order to effectively establish an awareness of the global picture.\nFigure 2: An HMM Cognitive Load Model (a 5-state transition diagram over the load levels negligibly, slightly, fairly, heavily, and overly loaded, together with an example observation-probability matrix B over the observation symbols 0-9).\nIn the following, we adopt the Hidden Markov Model (HMM) approach [13] to model a human``s cognitive capacity.\nIt is thus assumed that at each time step the secondary task performance of a human subject in a team composed of human-agent-pairs (HAPs) is observable to all the team members.\nHuman team members'' secondary task performance is used for estimating their hidden cognitive loads.\nAn HMM is denoted by λ = (N, V, A, B, π), where N is a set of hidden states, V is a set of observation symbols, A is a set of state transition probability distributions, B is a set of observation symbol probability distributions (one for each hidden state), and π is the initial state distribution.\nWe consider a 5-state HMM model of human cognitive load as follows (Figure 2).\nThe hidden states are 0 (negligibly loaded), 1 (slightly loaded), 2 (fairly loaded), 3 (heavily loaded), and 4 (overly loaded).\nThe observable states are tied with secondary task performance, which, in this study, is measured in terms of the number of items correctly recalled.\nAccording to Miller``s 7±2 rule, the observable states take integer values from 0 to 9 (the state is 9 when the number of items correctly recalled is no less than 9).
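To make these ingredients concrete, the following sketch (ours, not the SMMall implementation; the uniform initial distribution and all variable names are assumptions) encodes the five hidden load levels, the observation symbols derived from recall counts, and the example observation-probability matrix B transcribed from Figure 2.

import numpy as np

# Five hidden load levels (states 0..4) and ten observation symbols (0..9).
STATES = ["negligibly", "slightly", "fairly", "heavily", "overly"]

def observation_symbol(items_recalled: int) -> int:
    """Map secondary-task performance (items correctly recalled) to an observation symbol, capped at 9."""
    return min(max(items_recalled, 0), 9)

# Example observation-probability matrix B from Figure 2: B[s, o] = P(observe o | hidden state s).
B = np.array([
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.02, 0.03, 0.05, 0.10, 0.80],  # 0: negligibly loaded
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.05, 0.05, 0.10, 0.70, 0.10],  # 1: slightly loaded
    [0.00, 0.00, 0.00, 0.00, 0.01, 0.02, 0.45, 0.40, 0.10, 0.02],  # 2: fairly loaded
    [0.02, 0.03, 0.05, 0.15, 0.40, 0.30, 0.03, 0.02, 0.00, 0.00],  # 3: heavily loaded
    [0.10, 0.30, 0.30, 0.20, 0.10, 0.00, 0.00, 0.00, 0.00, 0.00],  # 4: overly loaded
])

# Initial state distribution pi (assumed uniform here; the paper does not specify it).
PI = np.full(len(STATES), 1.0 / len(STATES))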
For the example B matrix given in Fig. 2, it is very likely that the cognitive load of the subject is negligible when the number of items correctly recalled is no less than 9.\nHowever, determining the current hidden load status of a human partner is not trivial.\nThe model might be oversensitive if we only consider the last-step secondary task performance to locate the most likely hidden state.\nThere is ample evidence suggesting that human cognitive load is a continuous function over time and does not manifest sudden shifts unless there is a fundamental change in tasking demands.\nTo address this issue, we place a constraint on the state transition coefficients: no jumps of more than 2 states are allowed.\nIn addition, we take the position that a human subject is very likely overloaded if his secondary task performance is mostly low in recent time steps, while he is very likely not overloaded if his secondary task performance is mostly high recently.\nThis leads to the following Windowed-HMM approach.\nGiven a pre-trained HMM λ of human cognitive load and the recent observation sequence O_t of length w, let parameter w be the effective window size and ε^λ_t be the estimated hidden state at time step t. First apply the HMM to the observation sequence to find the optimal sequence of hidden states S^λ_t = s_1 s_2 ··· s_w (Viterbi algorithm).\nThen, compute the estimated hidden state ε^λ_t for the current time step, viewing it as a function of S^λ_t.\nWe consider all the hidden states in S^λ_t, weighted by their respective distance to ε^λ_{t−1} (the estimated state of the last step): the closer a state in S^λ_t is to ε^λ_{t−1}, the higher the probability of that state being ε^λ_t.\nε^λ_t is set to be the state with the highest probability (note that a state may have multiple appearances in S^λ_t).\nMore formally, the probability of state s ∈ S being ε^λ_t is given by\np_λ(s, t) = Σ_{s_j ∈ S^λ_t : s_j = s} η(s_j) · e^{−|s_j − ε^λ_{t−1}|},   (1)\nwhere η(s_j) = e^j / Σ_{k=1}^{w} e^k is the weight of s_j ∈ S^λ_t (the most recent hidden state has the most significant influence in predicting the next state).\nThe estimated state for the current step is the state with maximum likelihood:\nε^λ_t = argmax_{s ∈ S^λ_t} p_λ(s, t).   (2)
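Equations (1) and (2) can be read as a thin layer on top of a standard Viterbi decoding of the recent window; the sketch below is a minimal illustration of that reading, reusing the arrays and helper defined in the previous sketch. The banded transition matrix A here is only a placeholder that respects the no-jumps-of-more-than-2-states constraint (the actual transition values of Figure 2 are not reproduced), and the example recall sequence is hypothetical.

import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state sequence for the observation window (standard log-space Viterbi)."""
    n_states, w = A.shape[0], len(obs)
    delta = np.zeros((w, n_states))
    psi = np.zeros((w, n_states), dtype=int)
    delta[0] = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    for t in range(1, w):
        scores = delta[t - 1][:, None] + np.log(A + 1e-12)   # scores[i, j]: best path ending in i, then moving to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + np.log(B[:, obs[t]] + 1e-12)
    states = np.empty(w, dtype=int)
    states[-1] = int(np.argmax(delta[-1]))
    for t in range(w - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states

def windowed_estimate(obs_window, prev_state, A, B, pi):
    """Equations (1)-(2): weight the decoded states by recency and by closeness to the previous estimate."""
    s = viterbi(obs_window, A, B, pi)                # S_t = s_1 ... s_w
    eta = np.exp(np.arange(1, len(s) + 1))           # eta(s_j) proportional to e^j: recent states weigh more
    eta /= eta.sum()
    p = {}
    for j, sj in enumerate(s):
        p[int(sj)] = p.get(int(sj), 0.0) + eta[j] * np.exp(-abs(int(sj) - prev_state))
    return max(p, key=p.get)                         # epsilon_t = argmax_s p_lambda(s, t)

# Placeholder banded transition matrix: jumps of more than 2 states get probability 0.
A = np.array([
    [0.4, 0.4, 0.2, 0.0, 0.0],
    [0.3, 0.3, 0.2, 0.2, 0.0],
    [0.1, 0.2, 0.4, 0.2, 0.1],
    [0.0, 0.1, 0.2, 0.4, 0.3],
    [0.0, 0.0, 0.2, 0.3, 0.5],
])

# Hypothetical window of recall counts; a downward drift should push the estimate toward higher load.
recalls = [9, 9, 8, 7, 5, 4]
estimate = 0
for t in range(len(recalls)):
    window = [observation_symbol(r) for r in recalls[max(0, t - 3):t + 1]]
    estimate = windowed_estimate(window, estimate, A, B, PI)
print(f"estimated load level: {estimate} ({STATES[estimate]} loaded)")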
3.2 Agent Processing Load Model\nAccording to schema theory [11], multiple elements of information can be chunked as single elements in cognitive schemas.\nA schema can hold a huge amount of information, yet is processed as a single unit.\nWe adapt this idea and assume that agent i``s estimation of agent j``s processing load at time step t is a function of two factors: the number of chunks cj(t) and the total number sj(t) of information items being considered by agent j.\nIf cj(t) and sj(t) are observable to agent i, agent i can employ a Windowed-HMM approach as described in Section 3.1 to model and estimate agent j``s instantaneous processing load.\nIn the study reported below, we also used 5-state HMM models for agent processing load.\nWith 5 hidden states similar to those of the HMM models adopted for human cognitive load, we employed multivariate Gaussian observation probability distributions for the hidden states.\n3.3 HAP``s Processing Load Model\nAs discussed above, a Human-Agent-Pair (HAP) is viewed as a unit when teaming up with other HAPs.\nThe processing load of a HAP can thus be modeled as the co-effect of the processing load of the agent and the cognitive load of the human partner as captured by the agent.\nSuppose agent Ai has models for its own processing load and its human partner Hi``s cognitive load.\nDenote the agent processing load and human cognitive load of HAP ⟨Hi, Ai⟩ at time step t by μ^i_t and ν^i_t, respectively.\nAgent Ai can use μ^i_t and ν^i_t to estimate the load of ⟨Hi, Ai⟩ as a whole.\nSimilarly, if μ^j_t and ν^j_t are observable to agent Ai, it can estimate the load of ⟨Hj, Aj⟩.\nFor model simplicity, we still used 5-state HMM models for HAP processing load, with the estimated hidden states of the corresponding agent processing load and human cognitive load as the input observation vectors.\nBuilding a load estimation model is the means.\nThe goal is to use the model to enhance information sharing performance so that a team can form better shared mental models (e.g., to develop inter-agent role expectations in their collaboration), which is the key to high team performance.\n3.4 Load-Sensitive Information Processing\nEach agent has to adopt a certain strategy to process the incoming information.\nAs far as resource-bounded agents are concerned, it is of no use for an agent to share information with teammates who are already overloaded; they simply do not have the capacity to process the information.\nConsider the incoming information processing strategy shown in Table 1.\nTable 1: Incoming information processing strategy.\nHAPi load | Strategy\nOverly | Ignore all shared info.\nHeavily | Consider every teammate A ∈ [1, (1/q)|Q|]: randomly process half the amount of info from A; ignore info from any teammate B ∈ ((1/q)|Q|, |Q|].\nFairly | Process half of the shared info from any teammate.\nSlightly | Process all info from any A ∈ [1, (1/q)|Q|]; for any teammate B ∈ ((1/q)|Q|, |Q|], randomly process half the amount of info from B.\nNegligibly | Process all shared info.\nHAPj | Process all info from HAPj if it is overloaded.\n(* Q is a FIFO queue of agents from whom this HAP has received information at the current step; q is a constant known to all.)\nAgent Ai (of HAPi) ignores all the incoming information when it is overloaded, and processes all the incoming information when it is negligibly loaded.\nWhen it is heavily loaded, Ai randomly processes half of the messages from those agents that are among the first (1/q)|Q| teammates appearing in its communication queue; when it is fairly loaded, Ai randomly processes half of the messages from any teammate; when it is slightly loaded, Ai processes all the messages from those agents that are among the first (1/q)|Q| teammates appearing in its communication queue, and randomly processes half of the messages from the other teammates.\nTo further encourage sharing information at the right time, the last row of Table 1 says that HAPi, if it has not sent information to HAPj, who is currently overloaded, will process all the information from HAPj.\nThis can be justified from a resource allocation perspective: an agent can reallocate the computing resources reserved for communication to enhance its capacity for processing information.\nThis strategy favors never sending information to an overloaded teammate, and it suggests that estimating and exploiting others'' loads can be critical to enable an agent to share the right information with the right party at the right time.
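Read procedurally, Table 1 amounts to a small message-filtering routine. The sketch below is one possible rendering of it (function and argument names are ours, and randomly processing half of the info is approximated by an independent coin flip per message); it is not the SMMall code.

import random

def coin() -> bool:
    """Approximates 'randomly process half the amount of info'."""
    return random.random() < 0.5

def process_incoming(load_level, queue, inbox, sent_to, overloaded, q=2):
    """Select which incoming messages a HAP processes this step, following Table 1.

    load_level: estimated load of this HAP, 0 (negligibly) .. 4 (overly loaded).
    queue:      FIFO list Q of senders heard from at the current step.
    inbox:      dict sender -> list of messages received from that sender.
    sent_to:    set of teammates this HAP sent information to at this step.
    overloaded: set of teammates currently estimated to be overloaded.
    q:          constant known to all; the first |Q|/q senders in Q get priority.
    """
    head = set(queue[: len(queue) // q])        # teammates A in [1, (1/q)|Q|]
    processed = []
    for sender, msgs in inbox.items():
        if sender in overloaded and sender not in sent_to:
            processed.extend(msgs)              # last row of Table 1: always process an overloaded sender we did not write to
            continue
        if load_level == 4:                     # overly loaded: ignore all shared info
            continue
        if load_level == 3:                     # heavily loaded: half from the queue head, none from the rest
            processed.extend(m for m in msgs if sender in head and coin())
        elif load_level == 2:                   # fairly loaded: half from any teammate
            processed.extend(m for m in msgs if coin())
        elif load_level == 1:                   # slightly loaded: all from the head, half from the rest
            processed.extend(m for m in msgs if sender in head or coin())
        else:                                   # negligibly loaded: process all shared info
            processed.extend(msgs)
    return processed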
4.\nSYSTEM IMPLEMENTATION\nSMMall (Shared Mental Models for all) is a cognitive agent architecture developed for supporting human-centric collaborative computing.\nIt stresses the human``s role in team activities by means of novel collaborative concepts and multiple representations of context woven through all aspects of team work.\nHere we describe two components pertinent to the experiment reported in Section 5: multi-party communication and shared mental maps (a complete description of the SMMall system is beyond the scope of this paper).\n4.1 Multi-Party Communication\nMulti-party communication refers to conversations involving more than two parties.\nAside from the speaker, the listeners involved in a conversation can be classified into various roles such as addressees (the direct listeners), auditors (the intended listeners), overhearers (the unintended but anticipated listeners), and eavesdroppers (the unanticipated listeners).\nMulti-party communication is one of the characteristics of human teams.\nSMMall agents, which can form Human-Agent-Pairs with human partners, support multi-party communication with the following features.\n1.\nSMMall supports a collection of multi-party performatives such as MInform (multi-party inform), MAnnounce (multi-party announce), and MAsk (multi-party ask).\nThe listeners of a multi-party performative can be addressees, auditors, and overhearers, which correspond to `to'', `cc'', and `bcc'' in e-mail terms, respectively.\n2.\nSMMall supports channelled communication.\nThere are three built-in channels: the agentTalk channel (inter-agent activity-specific communication), the control channel (meta communication for team coordination), and the world channel (communication with the external world).\nAn agent can fully tune to a channel to collect the messages sent (or cc, bcc) to it.\nAn agent can also partially tune to a channel to get statistical information about the messages communicated over the channel.\nThis is particularly useful if an agent wants to know the communication load imposed on a teammate.\n4.2 Shared Belief Map & Load Display\nThe concept of a shared belief map has been proposed and implemented in SMMall; it responds to the need for innovative perspectives or concepts that allow group members to effectively represent and reason about shared mental models at different levels of abstraction.\nAs described in Section 5, humans and agents interacted through shared belief maps in the evaluation of the HMM-based load models.\nA shared belief map is a table of color-coded info-cells (cells associated with information).\nEach row captures the belief model of one team member, and each column corresponds to a specific information type (all columns together define the boundary of the information space being considered).\nThus, info-cell Cij of a map encodes all the beliefs (instances) of information type j held by agent i.
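Viewed as a data structure, such a map is simply a member-by-information-type grid of instance sets. The following schematic sketch (ours; the shade buckets and method names are illustrative assumptions, not the SMMall display code) captures that organization and the two queries the map supports: how full a cell is, and which instances a member is missing.

class SharedBeliefMap:
    """Rows are team members, columns are information types; info-cell C[i][j] holds belief instances."""

    def __init__(self, members, info_types):
        self.members = list(members)
        self.info_types = list(info_types)
        self.cells = {m: {t: set() for t in self.info_types} for m in self.members}

    def add_belief(self, member, info_type, instance):
        """Record one belief (information instance) of the given type for a member."""
        self.cells[member][info_type].add(instance)

    def cell_shade(self, member, info_type, buckets=(1, 3, 6, 10)):
        """Color-code a cell by how many instances it holds (bucket thresholds are illustrative)."""
        count = len(self.cells[member][info_type])
        return sum(count >= b for b in buckets)          # 0 (empty) .. 4 (darkest)

    def information_needs(self, info_type):
        """For each member, the instances of this type that some teammate holds but the member does not."""
        union = set().union(*(self.cells[m][info_type] for m in self.members))
        return {m: union - self.cells[m][info_type] for m in self.members}

Ordering semantically related types in adjacent columns then lets cells of similar shade form the plateaus discussed next.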
Color coding applies to each info-cell to indicate the number of information instances held by the corresponding agent.\nThe concept of a shared belief map helps maintain and present to a human partner a synergistic view of the shared mental models evolving within a team.\nBriefly, SMMall has implemented the concept with the following features:\n1.\nA context menu can be popped up for each info-cell to view and share the associated information instances.\nIt allows selective (selected subset) or holistic info-sharing.\n2.\nMixed-initiative info-sharing: both agents and human partners can initiate a multi-party conversation.\nIt also allows third-party info-sharing, say, A shares the information held by B with C.\n3.\nInformation types that are semantically related (e.g., by inference rules) can be closely organized.\nHence, nearby info-cells can form meaningful plateaus (or contour lines) of similar colors.\nColored plateaus indicate those sections of a shared mental model that bear high overlapping degrees.\n4.\nThe perceptible color (hue) difference manifested by a shared belief map indicates the information difference among team members, and hence visually represents the potential information needs of each team member (see Figure 3).\nSMMall has also implemented the HMM-based models (Section 3) to allow an agent to estimate its human partner``s and other team members'' cognitive/processing loads.\nAs shown in Fig. 3, below the shared belief map there is a load display for each team member.\nThere are 2 curves in a display: the blue (dark) one plots the human``s instantaneous cognitive loads, and the red one plots the processing loads of the HAP as a whole.\nIf there are n team members, each agent needs to maintain 2n HMM-based models to support the load displays.\nThe human partner of a HAP can adjust her cognitive load at runtime, as well as monitor another HAP``s agent processing load and its probability of processing the information she sends at the current time step.\nThus, the more closely a HAP can estimate the actual processing loads of other HAPs, the better information sharing performance the HAP can achieve.\nFigure 3: Shared Mental Map Display.\nIn sum, shared belief maps allow the inference of who needs what, and load displays allow the judgment of when to share information.\nTogether they allow us to investigate the impact of sharing the right info with the right party at the right time on the evolving of shared mental models.\n4.3 Metrics for Shared Mental Models\nWe here describe how we measure team performance in our experiment.\nWe use the mental model overlapping percentage (MMOP) as the basis for measuring shared mental models.\nThe MMOP of a group is defined as the intersection of all the individual mental states relative to the union of the individual mental states of the group.\nFormally, given a group of k agents G = {Ai | 1 ≤ i ≤ k}, let B_i = {I_im | 1 ≤ m ≤ n} be the beliefs (information) held by agent Ai, where each I_im is a set of information of the same type and n (the size of the information space) is fixed for the agents in G; then\nMMOP(G) = (100/n) · Σ_{1≤m≤n} |∩_{1≤i≤k} I_im| / |∪_{1≤i≤k} I_im|.   (3)\nFirst, a shared mental model can be measured in terms of the distance of averaged subgroup MMOPs to the MMOP of the whole group.\nWithout losing generality, we define the paired SMM distance D2(G) (over subgroups of size 2) as the distance between the average MMOP of all two-member subgroups {Ai, Aj}, 1 ≤ i < j ≤ k, and MMOP(G).
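Equation (3) translates directly into code; the sketch below (ours, with hypothetical belief sets) computes MMOP for a small group, representing each member's beliefs as a list of per-type sets.

def mmop(beliefs):
    """Mental model overlapping percentage, Equation (3).

    beliefs: list over the k group members; beliefs[i][m] is the set I_im of
    instances of information type m held by member Ai (the same n types for everyone).
    """
    k, n = len(beliefs), len(beliefs[0])
    total = 0.0
    for m in range(n):
        inter = set.intersection(*(set(beliefs[i][m]) for i in range(k)))
        union = set.union(*(set(beliefs[i][m]) for i in range(k)))
        total += len(inter) / len(union) if union else 1.0   # empty column: count as full overlap (our choice)
    return 100.0 * total / n

# Hypothetical group of k = 3 members over n = 2 information types.
group = [
    [{"a", "b"}, {"x"}],
    [{"a"},      {"x", "y"}],
    [{"a", "b"}, {"x", "y"}],
]
print(round(mmop(group), 1))   # (100/2) * (1/2 + 1/2) = 50.0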
(TH2-6 > TH1-6, TH2-8 > TH1-8, TH2-10 > TH1-10), and the performance difference of the TH1 and TH2 teams increased as communication capacity increased.\nThis indicates that, other things being equal, the benefit of exploiting load estimation when sharing information becomes more significant when communication capacity is larger.\nFrom Fig. 4 the same findings can be derived for the performance of agent teams.\nIn addition, the results also show that the SMMs of each team type were maintained steadily at a certain level after about 20 time steps.\nHowever, maintaining an SMM steadily at a certain level is a non-trivial team task.\nThe performance of teams that did not share any information (the `NoSharing'' curve in Fig. 4) decreased constantly as time proceeded.\n5.4 Multi-Party Communication for SMM\nWe now compare teams of type 2 and type 3 (which splits multi-party messages by receivers'' loads).\nAs plotted in Fig. 4, for HAP teams, the performance of team type 2 for each fixed communication capacity was consistently better than that of team type 3 (TH3-6 ≤ TH2-6, TH3-8 < TH2-8, TH3-10 < TH2-10).\nTH3 > TH2 > TH1 holds in Fig. 6(c) (larger distances indicate better subgroup SMMs), and TH3 > TH1 > TH2 holds in Fig. 6(a), and TH2